00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3477 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3088 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.021 The recommended git tool is: git 00:00:00.021 using credential 00000000-0000-0000-0000-000000000002 00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.039 Fetching changes from the remote Git repository 00:00:00.040 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.069 Using shallow fetch with depth 1 00:00:00.069 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.069 > git --version # timeout=10 00:00:00.130 > git --version # 'git version 2.39.2' 00:00:00.130 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.131 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.131 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.788 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.799 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.813 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:02.813 > git config core.sparsecheckout # timeout=10 00:00:02.827 > git read-tree -mu HEAD # timeout=10 00:00:02.846 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:02.863 Commit message: "inventory/dev: add missing long names" 00:00:02.864 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:02.981 [Pipeline] Start of Pipeline 00:00:02.995 [Pipeline] library 00:00:02.997 Loading library shm_lib@master 00:00:02.997 Library shm_lib@master is cached. Copying from home. 00:00:03.012 [Pipeline] node 00:00:03.018 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.020 [Pipeline] { 00:00:03.029 [Pipeline] catchError 00:00:03.030 [Pipeline] { 00:00:03.044 [Pipeline] wrap 00:00:03.055 [Pipeline] { 00:00:03.060 [Pipeline] stage 00:00:03.063 [Pipeline] { (Prologue) 00:00:03.287 [Pipeline] sh 00:00:03.568 + logger -p user.info -t JENKINS-CI 00:00:03.587 [Pipeline] echo 00:00:03.589 Node: GP6 00:00:03.599 [Pipeline] sh 00:00:03.892 [Pipeline] setCustomBuildProperty 00:00:03.905 [Pipeline] echo 00:00:03.907 Cleanup processes 00:00:03.913 [Pipeline] sh 00:00:04.198 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.198 1063777 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.210 [Pipeline] sh 00:00:04.485 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.485 ++ grep -v 'sudo pgrep' 00:00:04.485 ++ awk '{print $1}' 00:00:04.485 + sudo kill -9 00:00:04.485 + true 00:00:04.498 [Pipeline] cleanWs 00:00:04.507 [WS-CLEANUP] Deleting project workspace... 00:00:04.507 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.512 [WS-CLEANUP] done 00:00:04.516 [Pipeline] setCustomBuildProperty 00:00:04.530 [Pipeline] sh 00:00:04.805 + sudo git config --global --replace-all safe.directory '*' 00:00:04.873 [Pipeline] nodesByLabel 00:00:04.874 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.883 [Pipeline] httpRequest 00:00:04.887 HttpMethod: GET 00:00:04.888 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.890 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.917 Response Code: HTTP/1.1 200 OK 00:00:04.917 Success: Status code 200 is in the accepted range: 200,404 00:00:04.918 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:20.150 [Pipeline] sh 00:00:20.421 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:20.438 [Pipeline] httpRequest 00:00:20.441 HttpMethod: GET 00:00:20.442 URL: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:00:20.442 Sending request to url: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:00:20.452 Response Code: HTTP/1.1 200 OK 00:00:20.452 Success: Status code 200 is in the accepted range: 200,404 00:00:20.453 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:01:46.514 [Pipeline] sh 00:01:46.793 + tar --no-same-owner -xf spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:01:49.330 [Pipeline] sh 00:01:49.605 + git -C spdk log --oneline -n5 00:01:49.605 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:49.605 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:01:49.605 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:01:49.605 7a8d39909 Revert "test/common: Enable inherit_errexit" 00:01:49.605 4506c0c36 test/common: Enable inherit_errexit 00:01:49.622 [Pipeline] withCredentials 00:01:49.632 > git --version # timeout=10 00:01:49.643 > git --version # 'git version 2.39.2' 00:01:49.656 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:49.657 [Pipeline] { 00:01:49.665 [Pipeline] retry 00:01:49.666 [Pipeline] { 00:01:49.679 [Pipeline] sh 00:01:49.952 + git ls-remote http://dpdk.org/git/dpdk main 00:01:49.963 [Pipeline] } 00:01:49.977 [Pipeline] // retry 00:01:49.983 [Pipeline] } 00:01:49.994 [Pipeline] // withCredentials 00:01:50.003 [Pipeline] httpRequest 00:01:50.007 HttpMethod: GET 00:01:50.007 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:50.008 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:50.011 Response Code: HTTP/1.1 200 OK 00:01:50.011 Success: Status code 200 is in the accepted range: 200,404 00:01:50.011 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:57.444 [Pipeline] sh 00:01:57.720 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:59.103 [Pipeline] sh 00:01:59.379 + git -C dpdk log --oneline -n5 00:01:59.379 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:59.379 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:59.379 0efea35a2b app: move alignment attribute on types for MSVC 00:01:59.379 e2e546ab5b version: 24.07-rc0 00:01:59.379 
a9778aad62 version: 24.03.0 00:01:59.394 [Pipeline] } 00:01:59.411 [Pipeline] // stage 00:01:59.420 [Pipeline] stage 00:01:59.422 [Pipeline] { (Prepare) 00:01:59.443 [Pipeline] writeFile 00:01:59.461 [Pipeline] sh 00:01:59.738 + logger -p user.info -t JENKINS-CI 00:01:59.751 [Pipeline] sh 00:02:00.030 + logger -p user.info -t JENKINS-CI 00:02:00.040 [Pipeline] sh 00:02:00.321 + cat autorun-spdk.conf 00:02:00.321 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.321 SPDK_TEST_NVMF=1 00:02:00.321 SPDK_TEST_NVME_CLI=1 00:02:00.321 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.321 SPDK_TEST_NVMF_NICS=e810 00:02:00.321 SPDK_TEST_VFIOUSER=1 00:02:00.321 SPDK_RUN_UBSAN=1 00:02:00.321 NET_TYPE=phy 00:02:00.321 SPDK_TEST_NATIVE_DPDK=main 00:02:00.321 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.327 RUN_NIGHTLY=1 00:02:00.331 [Pipeline] readFile 00:02:00.351 [Pipeline] withEnv 00:02:00.353 [Pipeline] { 00:02:00.365 [Pipeline] sh 00:02:00.644 + set -ex 00:02:00.644 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:00.644 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.644 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.644 ++ SPDK_TEST_NVMF=1 00:02:00.644 ++ SPDK_TEST_NVME_CLI=1 00:02:00.644 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.644 ++ SPDK_TEST_NVMF_NICS=e810 00:02:00.644 ++ SPDK_TEST_VFIOUSER=1 00:02:00.644 ++ SPDK_RUN_UBSAN=1 00:02:00.644 ++ NET_TYPE=phy 00:02:00.644 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:00.645 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.645 ++ RUN_NIGHTLY=1 00:02:00.645 + case $SPDK_TEST_NVMF_NICS in 00:02:00.645 + DRIVERS=ice 00:02:00.645 + [[ tcp == \r\d\m\a ]] 00:02:00.645 + [[ -n ice ]] 00:02:00.645 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:00.645 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:00.645 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:00.645 rmmod: ERROR: Module irdma is not currently loaded 00:02:00.645 rmmod: ERROR: Module i40iw is not currently loaded 00:02:00.645 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:00.645 + true 00:02:00.645 + for D in $DRIVERS 00:02:00.645 + sudo modprobe ice 00:02:00.645 + exit 0 00:02:00.651 [Pipeline] } 00:02:00.661 [Pipeline] // withEnv 00:02:00.664 [Pipeline] } 00:02:00.676 [Pipeline] // stage 00:02:00.686 [Pipeline] catchError 00:02:00.688 [Pipeline] { 00:02:00.696 [Pipeline] timeout 00:02:00.696 Timeout set to expire in 40 min 00:02:00.697 [Pipeline] { 00:02:00.707 [Pipeline] stage 00:02:00.708 [Pipeline] { (Tests) 00:02:00.719 [Pipeline] sh 00:02:00.997 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:00.997 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:00.997 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:00.997 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:00.997 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.997 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:00.997 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:00.997 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:00.997 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:00.997 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:00.997 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:00.997 + source /etc/os-release 00:02:00.997 ++ NAME='Fedora Linux' 00:02:00.997 ++ VERSION='38 (Cloud Edition)' 00:02:00.997 ++ ID=fedora 00:02:00.997 ++ VERSION_ID=38 00:02:00.997 ++ VERSION_CODENAME= 00:02:00.997 ++ PLATFORM_ID=platform:f38 00:02:00.997 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:00.997 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:00.997 ++ LOGO=fedora-logo-icon 00:02:00.997 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:00.997 ++ HOME_URL=https://fedoraproject.org/ 00:02:00.997 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:00.997 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:00.997 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:00.997 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:00.997 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:00.997 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:00.997 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:00.997 ++ SUPPORT_END=2024-05-14 00:02:00.997 ++ VARIANT='Cloud Edition' 00:02:00.997 ++ VARIANT_ID=cloud 00:02:00.997 + uname -a 00:02:00.997 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:00.997 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:02.369 Hugepages 00:02:02.369 node hugesize free / total 00:02:02.369 node0 1048576kB 0 / 0 00:02:02.369 node0 2048kB 0 / 0 00:02:02.369 node1 1048576kB 0 / 0 00:02:02.369 node1 2048kB 0 / 0 00:02:02.369 00:02:02.369 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.369 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:02.369 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:02.369 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:02.369 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:02.369 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:02.369 + rm -f /tmp/spdk-ld-path 00:02:02.369 + source autorun-spdk.conf 00:02:02.369 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.369 ++ SPDK_TEST_NVMF=1 00:02:02.369 ++ SPDK_TEST_NVME_CLI=1 00:02:02.369 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.369 ++ SPDK_TEST_NVMF_NICS=e810 00:02:02.369 ++ SPDK_TEST_VFIOUSER=1 00:02:02.369 ++ SPDK_RUN_UBSAN=1 00:02:02.369 ++ NET_TYPE=phy 00:02:02.369 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:02.369 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.369 ++ RUN_NIGHTLY=1 00:02:02.369 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.369 + [[ -n '' ]] 00:02:02.369 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
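For reference, the configuration sourcing and NIC-driver preparation traced above reduce to the following pattern. This is a minimal sketch reconstructed from the xtrace lines, not the verbatim jbp/autorun script: the e810 -> ice mapping is inferred from the trace (SPDK_TEST_NVMF_NICS=e810, DRIVERS=ice), the rdma-specific branch is omitted, and error handling is simplified.

    #!/usr/bin/env bash
    # Sketch of the test-environment preparation seen in the trace above (assumed reconstruction).
    set -ex
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f "$conf" ]] && source "$conf"     # pulls in SPDK_TEST_NVMF_NICS, SPDK_TEST_NVMF_TRANSPORT, NET_TYPE, ...

    case "$SPDK_TEST_NVMF_NICS" in
      e810) DRIVERS=ice ;;                 # assumed mapping; the trace shows DRIVERS=ice for e810
      *)    DRIVERS= ;;
    esac

    if [[ -n "$DRIVERS" ]]; then           # the rdma-only branch from the trace is not shown here
      # Unload RDMA-capable modules first; "not currently loaded" errors are tolerated.
      sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
      for D in $DRIVERS; do
        sudo modprobe "$D"                 # e.g. load the ice driver for Intel E810 NICs
      done
    fi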
00:02:02.369 + for M in /var/spdk/build-*-manifest.txt 00:02:02.369 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.369 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:02.369 + for M in /var/spdk/build-*-manifest.txt 00:02:02.369 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.369 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:02.369 ++ uname 00:02:02.369 + [[ Linux == \L\i\n\u\x ]] 00:02:02.369 + sudo dmesg -T 00:02:02.369 + sudo dmesg --clear 00:02:02.369 + dmesg_pid=1064570 00:02:02.369 + [[ Fedora Linux == FreeBSD ]] 00:02:02.369 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.369 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.369 + sudo dmesg -Tw 00:02:02.369 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:02.369 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:02.369 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:02.369 + [[ -x /usr/src/fio-static/fio ]] 00:02:02.369 + export FIO_BIN=/usr/src/fio-static/fio 00:02:02.369 + FIO_BIN=/usr/src/fio-static/fio 00:02:02.369 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.369 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:02.369 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.369 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.369 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.369 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.369 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.369 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.369 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:02.369 Test configuration: 00:02:02.369 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.369 SPDK_TEST_NVMF=1 00:02:02.369 SPDK_TEST_NVME_CLI=1 00:02:02.369 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.369 SPDK_TEST_NVMF_NICS=e810 00:02:02.369 SPDK_TEST_VFIOUSER=1 00:02:02.369 SPDK_RUN_UBSAN=1 00:02:02.369 NET_TYPE=phy 00:02:02.369 SPDK_TEST_NATIVE_DPDK=main 00:02:02.369 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.369 RUN_NIGHTLY=1 15:20:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:02.369 15:20:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:02.369 15:20:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.369 15:20:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.369 15:20:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.369 15:20:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:02:02.369 15:20:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.369 15:20:15 -- paths/export.sh@5 -- $ export PATH 00:02:02.369 15:20:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.369 15:20:15 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:02.369 15:20:15 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:02.369 15:20:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715779215.XXXXXX 00:02:02.369 15:20:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715779215.j8GVKM 00:02:02.369 15:20:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:02.369 15:20:15 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:02:02.369 15:20:15 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.369 15:20:15 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:02.369 15:20:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:02.369 15:20:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:02.369 15:20:15 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:02.369 15:20:15 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:02.369 15:20:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.369 15:20:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:02.369 15:20:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:02.369 15:20:15 -- pm/common@17 -- $ local monitor 00:02:02.369 15:20:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.369 15:20:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.369 15:20:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.369 15:20:15 -- pm/common@21 -- $ date +%s 00:02:02.369 15:20:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.369 15:20:15 -- pm/common@21 -- $ date +%s 00:02:02.369 15:20:15 -- pm/common@25 -- $ sleep 1 00:02:02.369 15:20:15 -- pm/common@21 -- $ date +%s 00:02:02.369 15:20:15 -- pm/common@21 -- $ date +%s 00:02:02.369 15:20:15 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715779215 00:02:02.369 15:20:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715779215 00:02:02.369 15:20:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715779215 00:02:02.370 15:20:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715779215 00:02:02.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715779215_collect-vmstat.pm.log 00:02:02.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715779215_collect-cpu-load.pm.log 00:02:02.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715779215_collect-cpu-temp.pm.log 00:02:02.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715779215_collect-bmc-pm.bmc.pm.log 00:02:03.302 15:20:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:03.302 15:20:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.302 15:20:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.302 15:20:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.302 15:20:16 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.302 Wed May 15 01:20:16 PM UTC 2024 00:02:03.302 15:20:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.302 v24.05-pre-662-g253cca4fc 00:02:03.302 15:20:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:03.302 15:20:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.302 15:20:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.302 15:20:16 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:03.302 15:20:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:03.302 15:20:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.302 ************************************ 00:02:03.302 START TEST ubsan 00:02:03.302 ************************************ 00:02:03.302 15:20:16 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:03.302 using ubsan 00:02:03.302 00:02:03.302 real 0m0.000s 00:02:03.302 user 0m0.000s 00:02:03.302 sys 0m0.000s 00:02:03.302 15:20:16 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:03.302 15:20:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:03.302 ************************************ 00:02:03.302 END TEST ubsan 00:02:03.302 ************************************ 00:02:03.560 15:20:16 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:03.560 15:20:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:03.560 15:20:16 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:03.560 15:20:16 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:03.560 15:20:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:03.560 15:20:16 -- common/autotest_common.sh@10 -- $ set 
+x 00:02:03.560 ************************************ 00:02:03.560 START TEST build_native_dpdk 00:02:03.560 ************************************ 00:02:03.560 15:20:16 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:03.560 7e06c0de19 examples: move alignment attribute on types for MSVC 00:02:03.560 27595cd830 drivers: move alignment attribute on types for MSVC 00:02:03.560 0efea35a2b app: move alignment attribute on types for MSVC 00:02:03.560 e2e546ab5b version: 24.07-rc0 00:02:03.560 a9778aad62 version: 24.03.0 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 
00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:03.560 15:20:16 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:03.560 patching file config/rte_config.h 00:02:03.560 Hunk #1 succeeded at 70 (offset 11 lines). 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:03.560 15:20:16 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:07.744 The Meson build system 00:02:07.744 Version: 1.3.1 00:02:07.744 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:07.744 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:07.744 Build type: native build 00:02:07.744 Program cat found: YES (/usr/bin/cat) 00:02:07.744 Project name: DPDK 00:02:07.744 Project version: 24.07.0-rc0 00:02:07.744 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:07.744 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:07.744 Host machine cpu family: x86_64 00:02:07.744 Host machine cpu: x86_64 00:02:07.744 Message: ## Building in Developer Mode ## 00:02:07.744 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.744 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:07.744 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.744 Program python3 found: YES (/usr/bin/python3) 00:02:07.744 Program cat found: YES (/usr/bin/cat) 00:02:07.744 config/meson.build:120: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:02:07.744 Compiler for C supports arguments -march=native: YES 00:02:07.744 Checking for size of "void *" : 8 00:02:07.744 Checking for size of "void *" : 8 (cached) 00:02:07.744 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:07.744 Library m found: YES 00:02:07.744 Library numa found: YES 00:02:07.744 Has header "numaif.h" : YES 00:02:07.744 Library fdt found: NO 00:02:07.744 Library execinfo found: NO 00:02:07.744 Has header "execinfo.h" : YES 00:02:07.744 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:07.744 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.744 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.744 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.744 Run-time dependency openssl found: YES 3.0.9 00:02:07.744 Run-time dependency libpcap found: YES 1.10.4 00:02:07.744 Has header "pcap.h" with dependency libpcap: YES 00:02:07.744 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.744 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.744 Compiler for C supports arguments -Wformat: YES 00:02:07.744 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.744 Compiler for C supports arguments -Wformat-security: NO 00:02:07.744 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.744 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.744 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.744 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.744 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.744 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.744 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.744 Compiler for C supports arguments -Wundef: YES 00:02:07.744 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.744 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.744 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.744 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.744 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.745 Program objdump found: YES (/usr/bin/objdump) 00:02:07.745 Compiler for C supports arguments -mavx512f: YES 00:02:07.745 Checking if "AVX512 checking" compiles: YES 00:02:07.745 Fetching value of define "__SSE4_2__" : 1 00:02:07.745 Fetching value of define "__AES__" : 1 00:02:07.745 Fetching value of define "__AVX__" : 1 00:02:07.745 Fetching value of define "__AVX2__" : (undefined) 00:02:07.745 Fetching value of define "__AVX512BW__" : (undefined) 00:02:07.745 Fetching value of define "__AVX512CD__" : (undefined) 00:02:07.745 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:07.745 Fetching value of define "__AVX512F__" : (undefined) 00:02:07.745 Fetching value of define "__AVX512VL__" : (undefined) 00:02:07.745 Fetching value of define "__PCLMUL__" : 1 00:02:07.745 Fetching value of define "__RDRND__" : 1 00:02:07.745 Fetching value of define "__RDSEED__" : (undefined) 00:02:07.745 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.745 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.745 Message: lib/log: Defining dependency "log" 00:02:07.745 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.745 Message: lib/argparse: Defining dependency "argparse" 00:02:07.745 Message: lib/telemetry: Defining dependency "telemetry" 
00:02:07.745 Checking for function "getentropy" : NO 00:02:07.745 Message: lib/eal: Defining dependency "eal" 00:02:07.745 Message: lib/ring: Defining dependency "ring" 00:02:07.745 Message: lib/rcu: Defining dependency "rcu" 00:02:07.745 Message: lib/mempool: Defining dependency "mempool" 00:02:07.745 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.745 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.745 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.745 Compiler for C supports arguments -mpclmul: YES 00:02:07.745 Compiler for C supports arguments -maes: YES 00:02:07.745 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.745 Compiler for C supports arguments -mavx512bw: YES 00:02:07.745 Compiler for C supports arguments -mavx512dq: YES 00:02:07.745 Compiler for C supports arguments -mavx512vl: YES 00:02:07.745 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.745 Compiler for C supports arguments -mavx2: YES 00:02:07.745 Compiler for C supports arguments -mavx: YES 00:02:07.745 Message: lib/net: Defining dependency "net" 00:02:07.745 Message: lib/meter: Defining dependency "meter" 00:02:07.745 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.745 Message: lib/pci: Defining dependency "pci" 00:02:07.745 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.745 Message: lib/metrics: Defining dependency "metrics" 00:02:07.745 Message: lib/hash: Defining dependency "hash" 00:02:07.745 Message: lib/timer: Defining dependency "timer" 00:02:07.745 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.745 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:07.745 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:07.745 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:07.745 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:07.745 Message: lib/acl: Defining dependency "acl" 00:02:07.745 Message: lib/bbdev: Defining dependency "bbdev" 00:02:07.745 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:07.745 Run-time dependency libelf found: YES 0.190 00:02:07.745 Message: lib/bpf: Defining dependency "bpf" 00:02:07.745 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:07.745 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.745 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.745 Message: lib/distributor: Defining dependency "distributor" 00:02:07.745 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.745 Message: lib/efd: Defining dependency "efd" 00:02:07.745 Message: lib/eventdev: Defining dependency "eventdev" 00:02:07.745 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:07.745 Message: lib/gpudev: Defining dependency "gpudev" 00:02:07.745 Message: lib/gro: Defining dependency "gro" 00:02:07.745 Message: lib/gso: Defining dependency "gso" 00:02:07.745 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:07.745 Message: lib/jobstats: Defining dependency "jobstats" 00:02:07.745 Message: lib/latencystats: Defining dependency "latencystats" 00:02:07.745 Message: lib/lpm: Defining dependency "lpm" 00:02:07.745 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.745 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:07.745 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:07.745 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:07.745 Message: 
lib/member: Defining dependency "member" 00:02:07.745 Message: lib/pcapng: Defining dependency "pcapng" 00:02:07.745 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.745 Message: lib/power: Defining dependency "power" 00:02:07.745 Message: lib/rawdev: Defining dependency "rawdev" 00:02:07.745 Message: lib/regexdev: Defining dependency "regexdev" 00:02:07.745 Message: lib/mldev: Defining dependency "mldev" 00:02:07.745 Message: lib/rib: Defining dependency "rib" 00:02:07.745 Message: lib/reorder: Defining dependency "reorder" 00:02:07.745 Message: lib/sched: Defining dependency "sched" 00:02:07.745 Message: lib/security: Defining dependency "security" 00:02:07.745 Message: lib/stack: Defining dependency "stack" 00:02:07.745 Has header "linux/userfaultfd.h" : YES 00:02:07.745 Has header "linux/vduse.h" : YES 00:02:07.745 Message: lib/vhost: Defining dependency "vhost" 00:02:07.745 Message: lib/ipsec: Defining dependency "ipsec" 00:02:07.745 Message: lib/pdcp: Defining dependency "pdcp" 00:02:07.745 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.745 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:07.745 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:07.745 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:07.745 Message: lib/fib: Defining dependency "fib" 00:02:07.745 Message: lib/port: Defining dependency "port" 00:02:07.745 Message: lib/pdump: Defining dependency "pdump" 00:02:07.745 Message: lib/table: Defining dependency "table" 00:02:07.745 Message: lib/pipeline: Defining dependency "pipeline" 00:02:07.745 Message: lib/graph: Defining dependency "graph" 00:02:07.745 Message: lib/node: Defining dependency "node" 00:02:07.745 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.651 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.651 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.651 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.651 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:09.651 Compiler for C supports arguments -Wno-unused-value: YES 00:02:09.651 Compiler for C supports arguments -Wno-format: YES 00:02:09.651 Compiler for C supports arguments -Wno-format-security: YES 00:02:09.651 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:09.651 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:09.651 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:09.651 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:09.651 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.651 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.651 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:09.651 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:09.651 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:09.651 Has header "sys/epoll.h" : YES 00:02:09.651 Program doxygen found: YES (/usr/bin/doxygen) 00:02:09.651 Configuring doxy-api-html.conf using configuration 00:02:09.651 Configuring doxy-api-man.conf using configuration 00:02:09.651 Program mandb found: YES (/usr/bin/mandb) 00:02:09.651 Program sphinx-build found: NO 00:02:09.651 Configuring rte_build_config.h using configuration 00:02:09.651 Message: 00:02:09.651 ================= 00:02:09.651 Applications Enabled 00:02:09.651 ================= 00:02:09.651 00:02:09.651 apps: 00:02:09.651 dumpcap, graph, 
pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:09.651 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:09.651 test-pmd, test-regex, test-sad, test-security-perf, 00:02:09.651 00:02:09.651 Message: 00:02:09.651 ================= 00:02:09.651 Libraries Enabled 00:02:09.651 ================= 00:02:09.651 00:02:09.651 libs: 00:02:09.651 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:02:09.651 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:02:09.651 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:02:09.651 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:02:09.651 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:02:09.651 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:02:09.651 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:02:09.651 node, 00:02:09.651 00:02:09.651 Message: 00:02:09.651 =============== 00:02:09.651 Drivers Enabled 00:02:09.651 =============== 00:02:09.651 00:02:09.651 common: 00:02:09.651 00:02:09.651 bus: 00:02:09.651 pci, vdev, 00:02:09.651 mempool: 00:02:09.651 ring, 00:02:09.651 dma: 00:02:09.651 00:02:09.651 net: 00:02:09.651 i40e, 00:02:09.651 raw: 00:02:09.651 00:02:09.651 crypto: 00:02:09.651 00:02:09.651 compress: 00:02:09.651 00:02:09.651 regex: 00:02:09.651 00:02:09.651 ml: 00:02:09.651 00:02:09.651 vdpa: 00:02:09.651 00:02:09.651 event: 00:02:09.651 00:02:09.651 baseband: 00:02:09.651 00:02:09.651 gpu: 00:02:09.651 00:02:09.651 00:02:09.651 Message: 00:02:09.651 ================= 00:02:09.651 Content Skipped 00:02:09.651 ================= 00:02:09.651 00:02:09.651 apps: 00:02:09.651 00:02:09.651 libs: 00:02:09.651 00:02:09.651 drivers: 00:02:09.651 common/cpt: not in enabled drivers build config 00:02:09.651 common/dpaax: not in enabled drivers build config 00:02:09.651 common/iavf: not in enabled drivers build config 00:02:09.651 common/idpf: not in enabled drivers build config 00:02:09.651 common/ionic: not in enabled drivers build config 00:02:09.651 common/mvep: not in enabled drivers build config 00:02:09.651 common/octeontx: not in enabled drivers build config 00:02:09.651 bus/auxiliary: not in enabled drivers build config 00:02:09.651 bus/cdx: not in enabled drivers build config 00:02:09.651 bus/dpaa: not in enabled drivers build config 00:02:09.651 bus/fslmc: not in enabled drivers build config 00:02:09.651 bus/ifpga: not in enabled drivers build config 00:02:09.651 bus/platform: not in enabled drivers build config 00:02:09.651 bus/uacce: not in enabled drivers build config 00:02:09.651 bus/vmbus: not in enabled drivers build config 00:02:09.651 common/cnxk: not in enabled drivers build config 00:02:09.651 common/mlx5: not in enabled drivers build config 00:02:09.651 common/nfp: not in enabled drivers build config 00:02:09.651 common/nitrox: not in enabled drivers build config 00:02:09.651 common/qat: not in enabled drivers build config 00:02:09.651 common/sfc_efx: not in enabled drivers build config 00:02:09.651 mempool/bucket: not in enabled drivers build config 00:02:09.651 mempool/cnxk: not in enabled drivers build config 00:02:09.651 mempool/dpaa: not in enabled drivers build config 00:02:09.651 mempool/dpaa2: not in enabled drivers build config 00:02:09.651 mempool/octeontx: not in enabled drivers build config 00:02:09.651 mempool/stack: not in enabled drivers build config 00:02:09.651 dma/cnxk: not in enabled drivers build 
config 00:02:09.651 dma/dpaa: not in enabled drivers build config 00:02:09.651 dma/dpaa2: not in enabled drivers build config 00:02:09.651 dma/hisilicon: not in enabled drivers build config 00:02:09.651 dma/idxd: not in enabled drivers build config 00:02:09.651 dma/ioat: not in enabled drivers build config 00:02:09.651 dma/skeleton: not in enabled drivers build config 00:02:09.651 net/af_packet: not in enabled drivers build config 00:02:09.651 net/af_xdp: not in enabled drivers build config 00:02:09.651 net/ark: not in enabled drivers build config 00:02:09.651 net/atlantic: not in enabled drivers build config 00:02:09.651 net/avp: not in enabled drivers build config 00:02:09.651 net/axgbe: not in enabled drivers build config 00:02:09.651 net/bnx2x: not in enabled drivers build config 00:02:09.651 net/bnxt: not in enabled drivers build config 00:02:09.651 net/bonding: not in enabled drivers build config 00:02:09.651 net/cnxk: not in enabled drivers build config 00:02:09.651 net/cpfl: not in enabled drivers build config 00:02:09.651 net/cxgbe: not in enabled drivers build config 00:02:09.651 net/dpaa: not in enabled drivers build config 00:02:09.651 net/dpaa2: not in enabled drivers build config 00:02:09.651 net/e1000: not in enabled drivers build config 00:02:09.651 net/ena: not in enabled drivers build config 00:02:09.651 net/enetc: not in enabled drivers build config 00:02:09.651 net/enetfec: not in enabled drivers build config 00:02:09.651 net/enic: not in enabled drivers build config 00:02:09.651 net/failsafe: not in enabled drivers build config 00:02:09.651 net/fm10k: not in enabled drivers build config 00:02:09.651 net/gve: not in enabled drivers build config 00:02:09.651 net/hinic: not in enabled drivers build config 00:02:09.651 net/hns3: not in enabled drivers build config 00:02:09.651 net/iavf: not in enabled drivers build config 00:02:09.651 net/ice: not in enabled drivers build config 00:02:09.651 net/idpf: not in enabled drivers build config 00:02:09.651 net/igc: not in enabled drivers build config 00:02:09.651 net/ionic: not in enabled drivers build config 00:02:09.651 net/ipn3ke: not in enabled drivers build config 00:02:09.651 net/ixgbe: not in enabled drivers build config 00:02:09.651 net/mana: not in enabled drivers build config 00:02:09.651 net/memif: not in enabled drivers build config 00:02:09.651 net/mlx4: not in enabled drivers build config 00:02:09.651 net/mlx5: not in enabled drivers build config 00:02:09.651 net/mvneta: not in enabled drivers build config 00:02:09.651 net/mvpp2: not in enabled drivers build config 00:02:09.651 net/netvsc: not in enabled drivers build config 00:02:09.651 net/nfb: not in enabled drivers build config 00:02:09.651 net/nfp: not in enabled drivers build config 00:02:09.651 net/ngbe: not in enabled drivers build config 00:02:09.652 net/null: not in enabled drivers build config 00:02:09.652 net/octeontx: not in enabled drivers build config 00:02:09.652 net/octeon_ep: not in enabled drivers build config 00:02:09.652 net/pcap: not in enabled drivers build config 00:02:09.652 net/pfe: not in enabled drivers build config 00:02:09.652 net/qede: not in enabled drivers build config 00:02:09.652 net/ring: not in enabled drivers build config 00:02:09.652 net/sfc: not in enabled drivers build config 00:02:09.652 net/softnic: not in enabled drivers build config 00:02:09.652 net/tap: not in enabled drivers build config 00:02:09.652 net/thunderx: not in enabled drivers build config 00:02:09.652 net/txgbe: not in enabled drivers build config 
00:02:09.652 net/vdev_netvsc: not in enabled drivers build config 00:02:09.652 net/vhost: not in enabled drivers build config 00:02:09.652 net/virtio: not in enabled drivers build config 00:02:09.652 net/vmxnet3: not in enabled drivers build config 00:02:09.652 raw/cnxk_bphy: not in enabled drivers build config 00:02:09.652 raw/cnxk_gpio: not in enabled drivers build config 00:02:09.652 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:09.652 raw/ifpga: not in enabled drivers build config 00:02:09.652 raw/ntb: not in enabled drivers build config 00:02:09.652 raw/skeleton: not in enabled drivers build config 00:02:09.652 crypto/armv8: not in enabled drivers build config 00:02:09.652 crypto/bcmfs: not in enabled drivers build config 00:02:09.652 crypto/caam_jr: not in enabled drivers build config 00:02:09.652 crypto/ccp: not in enabled drivers build config 00:02:09.652 crypto/cnxk: not in enabled drivers build config 00:02:09.652 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.652 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.652 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.652 crypto/mlx5: not in enabled drivers build config 00:02:09.652 crypto/mvsam: not in enabled drivers build config 00:02:09.652 crypto/nitrox: not in enabled drivers build config 00:02:09.652 crypto/null: not in enabled drivers build config 00:02:09.652 crypto/octeontx: not in enabled drivers build config 00:02:09.652 crypto/openssl: not in enabled drivers build config 00:02:09.652 crypto/scheduler: not in enabled drivers build config 00:02:09.652 crypto/uadk: not in enabled drivers build config 00:02:09.652 crypto/virtio: not in enabled drivers build config 00:02:09.652 compress/isal: not in enabled drivers build config 00:02:09.652 compress/mlx5: not in enabled drivers build config 00:02:09.652 compress/nitrox: not in enabled drivers build config 00:02:09.652 compress/octeontx: not in enabled drivers build config 00:02:09.652 compress/zlib: not in enabled drivers build config 00:02:09.652 regex/mlx5: not in enabled drivers build config 00:02:09.652 regex/cn9k: not in enabled drivers build config 00:02:09.652 ml/cnxk: not in enabled drivers build config 00:02:09.652 vdpa/ifc: not in enabled drivers build config 00:02:09.652 vdpa/mlx5: not in enabled drivers build config 00:02:09.652 vdpa/nfp: not in enabled drivers build config 00:02:09.652 vdpa/sfc: not in enabled drivers build config 00:02:09.652 event/cnxk: not in enabled drivers build config 00:02:09.652 event/dlb2: not in enabled drivers build config 00:02:09.652 event/dpaa: not in enabled drivers build config 00:02:09.652 event/dpaa2: not in enabled drivers build config 00:02:09.652 event/dsw: not in enabled drivers build config 00:02:09.652 event/opdl: not in enabled drivers build config 00:02:09.652 event/skeleton: not in enabled drivers build config 00:02:09.652 event/sw: not in enabled drivers build config 00:02:09.652 event/octeontx: not in enabled drivers build config 00:02:09.652 baseband/acc: not in enabled drivers build config 00:02:09.652 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:09.652 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:09.652 baseband/la12xx: not in enabled drivers build config 00:02:09.652 baseband/null: not in enabled drivers build config 00:02:09.652 baseband/turbo_sw: not in enabled drivers build config 00:02:09.652 gpu/cuda: not in enabled drivers build config 00:02:09.652 00:02:09.652 00:02:09.652 Build targets in project: 224 
00:02:09.652 00:02:09.652 DPDK 24.07.0-rc0 00:02:09.652 00:02:09.652 User defined options 00:02:09.652 libdir : lib 00:02:09.652 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:09.652 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:09.652 c_link_args : 00:02:09.652 enable_docs : false 00:02:09.652 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:09.652 enable_kmods : false 00:02:09.652 machine : native 00:02:09.652 tests : false 00:02:09.652 00:02:09.652 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.652 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:09.652 15:20:22 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:09.652 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:09.652 [1/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.652 [2/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.652 [3/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.652 [4/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.652 [5/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.652 [6/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.652 [7/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.652 [8/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.652 [9/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.652 [10/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.652 [11/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.652 [12/722] Linking static target lib/librte_kvargs.a 00:02:09.652 [13/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.652 [14/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:09.910 [15/722] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.910 [16/722] Linking static target lib/librte_log.a 00:02:09.910 [17/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:09.910 [18/722] Linking static target lib/librte_argparse.a 00:02:10.170 [19/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.432 [20/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.731 [21/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.731 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.731 [23/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.731 [24/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.731 [25/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.731 [26/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.731 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.731 [28/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.731 [29/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.731 [30/722] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.731 [31/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.731 [32/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.731 [33/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.731 [34/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.731 [35/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.731 [36/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.731 [37/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.731 [38/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.731 [39/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.731 [40/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.731 [41/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.731 [42/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.731 [43/722] Linking target lib/librte_log.so.24.2 00:02:10.731 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.731 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.731 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.731 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.731 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.731 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.731 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.731 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.995 [52/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.995 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.995 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.995 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.995 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.996 [57/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.996 [58/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.996 [59/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:10.996 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.996 [61/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.996 [62/722] Linking target lib/librte_kvargs.so.24.2 00:02:10.996 [63/722] Linking target lib/librte_argparse.so.24.2 00:02:11.256 [64/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.256 [65/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.256 [66/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:11.256 [67/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.256 [68/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.256 [69/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.517 
[70/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.517 [71/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.517 [72/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.779 [73/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:11.779 [74/722] Linking static target lib/librte_pci.a 00:02:11.779 [75/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.779 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.779 [77/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.779 [78/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:11.779 [79/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.779 [80/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.038 [81/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.038 [82/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.038 [83/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.039 [84/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:12.039 [85/722] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:12.039 [86/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.039 [87/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.039 [88/722] Linking static target lib/librte_ring.a 00:02:12.039 [89/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.039 [90/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:12.039 [91/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.039 [92/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.039 [93/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.039 [94/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.039 [95/722] Linking static target lib/librte_meter.a 00:02:12.039 [96/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.039 [97/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.039 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.039 [99/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.039 [100/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:12.039 [101/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.039 [102/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.039 [103/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.039 [104/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.039 [105/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.298 [106/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.298 [107/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.298 [108/722] Linking static target lib/librte_telemetry.a 00:02:12.298 [109/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.298 [110/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:12.298 [111/722] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:12.298 [112/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:12.298 [113/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.298 [114/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:12.564 [115/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.564 [116/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.564 [117/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.564 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.564 [119/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.564 [120/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.564 [121/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.564 [122/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:12.564 [123/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.564 [124/722] Linking static target lib/librte_net.a 00:02:12.822 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.822 [126/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.822 [127/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.822 [128/722] Linking static target lib/librte_mempool.a 00:02:12.822 [129/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.822 [130/722] Linking target lib/librte_telemetry.so.24.2 00:02:12.822 [131/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.822 [132/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.085 [133/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.085 [134/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.085 [135/722] Linking static target lib/librte_eal.a 00:02:13.085 [136/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.085 [137/722] Linking static target lib/librte_cmdline.a 00:02:13.085 [138/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.085 [139/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.085 [140/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.085 [141/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:13.085 [142/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.347 [143/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:13.347 [144/722] Linking static target lib/librte_cfgfile.a 00:02:13.347 [145/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.347 [146/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:13.347 [147/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:13.348 [148/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.348 [149/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:13.348 [150/722] Linking static target lib/librte_metrics.a 00:02:13.348 [151/722] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.348 [152/722] Linking 
static target lib/librte_rcu.a 00:02:13.608 [153/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:13.608 [154/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:13.608 [155/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:13.608 [156/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:13.608 [157/722] Linking static target lib/librte_bitratestats.a 00:02:13.608 [158/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:13.608 [159/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:13.868 [160/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.868 [161/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:13.868 [162/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.868 [163/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.868 [164/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:13.868 [165/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.868 [166/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.868 [167/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.868 [168/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.868 [169/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:13.868 [170/722] Linking static target lib/librte_timer.a 00:02:13.868 [171/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.130 [172/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:14.130 [173/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:14.130 [174/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.130 [175/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:14.130 [176/722] Linking static target lib/librte_bbdev.a 00:02:14.130 [177/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.388 [178/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.388 [179/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.388 [180/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:14.388 [181/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.650 [182/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.650 [183/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.650 [184/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:14.650 [185/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.650 [186/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:14.650 [187/722] Linking static target lib/librte_compressdev.a 00:02:14.650 [188/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:14.917 [189/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:14.917 [190/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.917 [191/722] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:15.176 [192/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:15.176 [193/722] Linking static target lib/librte_distributor.a 00:02:15.176 [194/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.176 [195/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:15.176 [196/722] Linking static target lib/librte_dmadev.a 00:02:15.176 [197/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.176 [198/722] Linking static target lib/librte_bpf.a 00:02:15.176 [199/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:15.436 [200/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:15.436 [201/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:15.436 [202/722] Linking static target lib/librte_dispatcher.a 00:02:15.436 [203/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.436 [204/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:15.436 [205/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:15.436 [206/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:15.436 [207/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:15.436 [208/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:15.436 [209/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:15.699 [210/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.699 [211/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:15.699 [212/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:15.699 [213/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.699 [214/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:15.699 [215/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:15.699 [216/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.699 [217/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:15.699 [218/722] Linking static target lib/librte_gpudev.a 00:02:15.699 [219/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.699 [220/722] Linking static target lib/librte_gro.a 00:02:15.699 [221/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:15.699 [222/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.961 [223/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:15.961 [224/722] Linking static target lib/librte_jobstats.a 00:02:15.961 [225/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:15.961 [226/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.961 [227/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:16.223 [228/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.223 [229/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:16.223 [230/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.223 [231/722] Compiling C object 
lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:16.223 [232/722] Linking static target lib/librte_latencystats.a 00:02:16.485 [233/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:16.485 [234/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.485 [235/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:16.485 [236/722] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:16.485 [237/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:16.485 [238/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:16.485 [239/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:16.485 [240/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:16.485 [241/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:16.743 [242/722] Linking static target lib/librte_ip_frag.a 00:02:16.743 [243/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:16.743 [244/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.743 [245/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:16.743 [246/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:16.743 [247/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.743 [248/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:17.008 [249/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:17.008 [250/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:17.008 [251/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.268 [252/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.268 [253/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:17.268 [254/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.268 [255/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:17.268 [256/722] Linking static target lib/librte_gso.a 00:02:17.268 [257/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:17.268 [258/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:17.268 [259/722] Linking static target lib/librte_regexdev.a 00:02:17.530 [260/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:17.530 [261/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:17.530 [262/722] Linking static target lib/librte_rawdev.a 00:02:17.530 [263/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:17.530 [264/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:17.530 [265/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:17.530 [266/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:17.530 [267/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:17.530 [268/722] Linking static target lib/librte_efd.a 00:02:17.530 [269/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.530 [270/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:17.795 [271/722] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:17.795 [272/722] Linking static target lib/librte_pcapng.a 00:02:17.795 [273/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:17.795 [274/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:17.795 [275/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:17.795 [276/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:17.795 [277/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:17.796 [278/722] Linking static target lib/librte_mldev.a 00:02:17.796 [279/722] Linking static target lib/librte_stack.a 00:02:17.796 [280/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:17.796 [281/722] Linking static target lib/librte_lpm.a 00:02:18.057 [282/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.057 [283/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.057 [284/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.057 [285/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.057 [286/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:18.057 [287/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.057 [288/722] Linking static target lib/acl/libavx2_tmp.a 00:02:18.057 [289/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:18.057 [290/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.323 [291/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.323 [292/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.323 [293/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:18.323 [294/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.323 [295/722] Linking static target lib/librte_hash.a 00:02:18.323 [296/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.323 [297/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.323 [298/722] Linking static target lib/librte_power.a 00:02:18.323 [299/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.323 [300/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.323 [301/722] Linking static target lib/librte_reorder.a 00:02:18.590 [302/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:18.590 [303/722] Linking static target lib/acl/libavx512_tmp.a 00:02:18.590 [304/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.590 [305/722] Linking static target lib/librte_acl.a 00:02:18.590 [306/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.590 [307/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.590 [308/722] Linking static target lib/librte_security.a 00:02:18.590 [309/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.590 [310/722] Linking static target lib/librte_mbuf.a 00:02:18.854 [311/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:18.854 [312/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:18.854 [313/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 
00:02:18.854 [314/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.854 [315/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:18.854 [316/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:18.854 [317/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.854 [318/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:19.114 [319/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:19.114 [320/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:19.114 [321/722] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.114 [322/722] Linking static target lib/librte_rib.a 00:02:19.114 [323/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:19.114 [324/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:19.378 [325/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.378 [326/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:19.378 [327/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.378 [328/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:19.378 [329/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:19.378 [330/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:19.378 [331/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:19.378 [332/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:19.378 [333/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.378 [334/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:19.639 [335/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:19.639 [336/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.639 [337/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:19.900 [338/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.900 [339/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:19.900 [340/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:19.900 [341/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:20.161 [342/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:20.161 [343/722] Linking static target lib/librte_member.a 00:02:20.161 [344/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:20.161 [345/722] Linking static target lib/librte_eventdev.a 00:02:20.161 [346/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.161 [347/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:20.161 [348/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.421 [349/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:20.421 [350/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:20.421 [351/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.421 [352/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:20.421 [353/722] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:20.421 [354/722] Linking static target lib/librte_ethdev.a 00:02:20.421 [355/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.421 [356/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:20.421 [357/722] Linking static target lib/librte_cryptodev.a 00:02:20.682 [358/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.682 [359/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:20.682 [360/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:20.682 [361/722] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:20.682 [362/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:20.682 [363/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:20.682 [364/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:20.682 [365/722] Linking static target lib/librte_sched.a 00:02:20.682 [366/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:20.682 [367/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:20.682 [368/722] Linking static target lib/librte_fib.a 00:02:20.942 [369/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:20.942 [370/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:20.942 [371/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:20.942 [372/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:20.942 [373/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:20.942 [374/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:21.209 [375/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:21.209 [376/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.209 [377/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.209 [378/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:21.209 [379/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.209 [380/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:21.470 [381/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:21.470 [382/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:21.730 [383/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:21.730 [384/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:21.730 [385/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:21.730 [386/722] Linking static target lib/librte_pdump.a 00:02:21.730 [387/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:21.730 [388/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:21.730 [389/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.730 [390/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:21.992 [391/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:21.992 [392/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:21.992 [393/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:21.992 [394/722] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:21.992 [395/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:21.992 [396/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:21.992 [397/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.992 [398/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:22.253 [399/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:22.253 [400/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.253 [401/722] Linking static target lib/librte_ipsec.a 00:02:22.253 [402/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:22.253 [403/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:22.253 [404/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:22.253 [405/722] Linking static target lib/librte_table.a 00:02:22.515 [406/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:22.515 [407/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:22.515 [408/722] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.778 [409/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:22.778 [410/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.778 [411/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:23.040 [412/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.040 [413/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:23.040 [414/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:23.040 [415/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:23.303 [416/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.303 [417/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.303 [418/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:23.303 [419/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.303 [420/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:23.303 [421/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.303 [422/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.561 [423/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:23.561 [424/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.561 [425/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:23.561 [426/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:23.561 [427/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.561 [428/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.561 [429/722] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.825 [430/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.825 [431/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.825 [432/722] Linking static target drivers/librte_bus_vdev.a 00:02:23.825 [433/722] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:23.825 [434/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.825 [435/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:23.825 [436/722] Linking static target lib/librte_port.a 00:02:24.088 [437/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:24.088 [438/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.088 [439/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.088 [440/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:24.088 [441/722] Linking static target drivers/librte_bus_pci.a 00:02:24.088 [442/722] Linking static target lib/librte_graph.a 00:02:24.354 [443/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:24.354 [444/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.354 [445/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.354 [446/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:24.354 [447/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:24.354 [448/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.618 [449/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.618 [450/722] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.618 [451/722] Linking target lib/librte_eal.so.24.2 00:02:24.877 [452/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:24.877 [453/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:24.877 [454/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.877 [455/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:24.877 [456/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:24.877 [457/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.877 [458/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:24.877 [459/722] Linking target lib/librte_ring.so.24.2 00:02:24.877 [460/722] Linking target lib/librte_meter.so.24.2 00:02:24.877 [461/722] Linking target lib/librte_pci.so.24.2 00:02:25.140 [462/722] Linking target lib/librte_timer.so.24.2 00:02:25.140 [463/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.140 [464/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:25.140 [465/722] Linking target lib/librte_acl.so.24.2 00:02:25.140 [466/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:25.140 [467/722] Linking target lib/librte_cfgfile.so.24.2 00:02:25.140 [468/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:25.140 [469/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:25.140 [470/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:25.140 [471/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:25.140 [472/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:25.140 [473/722] Linking target lib/librte_dmadev.so.24.2 00:02:25.140 
[474/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.140 [475/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:25.404 [476/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:25.404 [477/722] Linking target lib/librte_jobstats.so.24.2 00:02:25.404 [478/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.404 [479/722] Linking static target drivers/librte_mempool_ring.a 00:02:25.404 [480/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.404 [481/722] Linking target lib/librte_rawdev.so.24.2 00:02:25.404 [482/722] Linking target lib/librte_rcu.so.24.2 00:02:25.404 [483/722] Linking target lib/librte_mempool.so.24.2 00:02:25.404 [484/722] Linking target lib/librte_stack.so.24.2 00:02:25.404 [485/722] Linking target drivers/librte_bus_pci.so.24.2 00:02:25.404 [486/722] Linking target drivers/librte_bus_vdev.so.24.2 00:02:25.404 [487/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:25.404 [488/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:25.404 [489/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:25.404 [490/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:25.404 [491/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:25.404 [492/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:25.404 [493/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:25.404 [494/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:25.667 [495/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:25.667 [496/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:25.667 [497/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:25.667 [498/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:25.667 [499/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:25.667 [500/722] Linking target drivers/librte_mempool_ring.so.24.2 00:02:25.667 [501/722] Linking target lib/librte_rib.so.24.2 00:02:25.667 [502/722] Linking target lib/librte_mbuf.so.24.2 00:02:25.667 [503/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:25.667 [504/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:25.667 [505/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:25.928 [506/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:25.928 [507/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:25.928 [508/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:25.928 [509/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:25.928 [510/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:25.928 [511/722] Linking target lib/librte_fib.so.24.2 00:02:25.928 [512/722] Linking target lib/librte_net.so.24.2 00:02:25.928 [513/722] Linking target lib/librte_bbdev.so.24.2 00:02:26.189 [514/722] Linking target lib/librte_compressdev.so.24.2 00:02:26.189 [515/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:26.189 [516/722] Linking target 
lib/librte_cryptodev.so.24.2 00:02:26.189 [517/722] Linking target lib/librte_distributor.so.24.2 00:02:26.189 [518/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:26.189 [519/722] Linking target lib/librte_gpudev.so.24.2 00:02:26.459 [520/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:26.459 [521/722] Linking target lib/librte_cmdline.so.24.2 00:02:26.459 [522/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:26.459 [523/722] Linking target lib/librte_hash.so.24.2 00:02:26.459 [524/722] Linking target lib/librte_regexdev.so.24.2 00:02:26.459 [525/722] Linking target lib/librte_reorder.so.24.2 00:02:26.459 [526/722] Linking target lib/librte_mldev.so.24.2 00:02:26.459 [527/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:26.459 [528/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:26.459 [529/722] Linking target lib/librte_sched.so.24.2 00:02:26.459 [530/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:26.459 [531/722] Linking target lib/librte_security.so.24.2 00:02:26.748 [532/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:26.748 [533/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:26.748 [534/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:26.748 [535/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:26.748 [536/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:26.748 [537/722] Linking target lib/librte_efd.so.24.2 00:02:26.748 [538/722] Linking target lib/librte_lpm.so.24.2 00:02:26.748 [539/722] Linking target lib/librte_member.so.24.2 00:02:26.748 [540/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:26.748 [541/722] Linking target lib/librte_ipsec.so.24.2 00:02:26.748 [542/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:27.015 [543/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:27.015 [544/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:27.015 [545/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:27.016 [546/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:27.016 [547/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:27.016 [548/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:27.016 [549/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:27.279 [550/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:27.279 [551/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:27.279 [552/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:27.280 [553/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:27.280 [554/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:27.280 [555/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:27.541 [556/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:27.541 [557/722] Linking static target drivers/net/i40e/base/libi40e_base.a 
00:02:27.541 [558/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:27.541 [559/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:27.541 [560/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:27.541 [561/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:27.801 [562/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:27.801 [563/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:27.801 [564/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:28.061 [565/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:28.061 [566/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:28.323 [567/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:28.323 [568/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:28.323 [569/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:28.323 [570/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:28.585 [571/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:28.585 [572/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:28.585 [573/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:28.851 [574/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:28.851 [575/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:29.113 [576/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:29.113 [577/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:29.113 [578/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:29.113 [579/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:29.374 [580/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:29.374 [581/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:29.374 [582/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:29.374 [583/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:29.374 [584/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:29.636 [585/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:29.636 [586/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:29.636 [587/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.636 [588/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:29.636 [589/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:29.636 [590/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:29.636 [591/722] Linking target lib/librte_ethdev.so.24.2 00:02:29.636 [592/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:29.901 [593/722] Linking static target lib/librte_pdcp.a 00:02:29.901 [594/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:29.901 [595/722] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:29.901 [596/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:29.901 [597/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:29.901 [598/722] Linking target lib/librte_metrics.so.24.2 00:02:30.162 [599/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:30.162 [600/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:30.162 [601/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:30.162 [602/722] Linking target lib/librte_bpf.so.24.2 00:02:30.162 [603/722] Linking target lib/librte_gro.so.24.2 00:02:30.162 [604/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:30.162 [605/722] Linking target lib/librte_eventdev.so.24.2 00:02:30.162 [606/722] Linking target lib/librte_gso.so.24.2 00:02:30.162 [607/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:30.424 [608/722] Linking target lib/librte_ip_frag.so.24.2 00:02:30.424 [609/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:30.424 [610/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:30.424 [611/722] Linking target lib/librte_pcapng.so.24.2 00:02:30.424 [612/722] Linking target lib/librte_power.so.24.2 00:02:30.424 [613/722] Linking target lib/librte_bitratestats.so.24.2 00:02:30.424 [614/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:30.424 [615/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.424 [616/722] Linking target lib/librte_latencystats.so.24.2 00:02:30.424 [617/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:30.424 [618/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:30.424 [619/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:30.424 [620/722] Linking target lib/librte_pdcp.so.24.2 00:02:30.424 [621/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:30.424 [622/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:30.424 [623/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:30.687 [624/722] Linking target lib/librte_dispatcher.so.24.2 00:02:30.687 [625/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:30.687 [626/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:30.687 [627/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:30.687 [628/722] Linking target lib/librte_port.so.24.2 00:02:30.687 [629/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:30.687 [630/722] Linking target lib/librte_pdump.so.24.2 00:02:30.687 [631/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:30.687 [632/722] Linking target lib/librte_graph.so.24.2 00:02:30.948 [633/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:30.948 [634/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:30.948 [635/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:30.948 [636/722] 
Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:30.948 [637/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:30.948 [638/722] Linking target lib/librte_table.so.24.2 00:02:30.948 [639/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:31.209 [640/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:31.468 [641/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:31.468 [642/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:31.468 [643/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:31.727 [644/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:31.728 [645/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:31.728 [646/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:31.728 [647/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:31.728 [648/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:31.987 [649/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:31.987 [650/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:31.987 [651/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:31.987 [652/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:31.987 [653/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:31.987 [654/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:31.987 [655/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:32.245 [656/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:32.245 [657/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:32.245 [658/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:32.504 [659/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:32.504 [660/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:32.504 [661/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:32.762 [662/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:32.762 [663/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:32.762 [664/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:32.762 [665/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:32.762 [666/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:33.020 [667/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:33.020 [668/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:33.020 [669/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:33.020 [670/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:33.278 [671/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:33.278 [672/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:33.278 [673/722] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:33.278 [674/722] Linking static target drivers/librte_net_i40e.a 
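The i40e PMD objects compiled just above are linked into both a static archive (librte_net_i40e.a) and, a few steps later, the shared librte_net_i40e.so; the remaining targets are the dpdk-test* applications, followed by the install pass further down. Once `ninja ... install` has populated the prefix from the configure summary, an application outside this tree would normally pull DPDK in through the installed pkg-config metadata rather than by naming individual libraries. A minimal sketch, assuming that install has completed and using myapp.c as a hypothetical placeholder source file:
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  cc myapp.c -o myapp $(pkg-config --cflags --libs libdpdk)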
00:02:33.278 [675/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:33.278 [676/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:33.536 [677/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:33.793 [678/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:33.793 [679/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.051 [680/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:34.051 [681/722] Linking target drivers/librte_net_i40e.so.24.2 00:02:34.615 [682/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:34.615 [683/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:34.615 [684/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:34.615 [685/722] Linking static target lib/librte_node.a 00:02:34.872 [686/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:34.872 [687/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.130 [688/722] Linking target lib/librte_node.so.24.2 00:02:36.501 [689/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:36.501 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:36.759 [691/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:38.132 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:38.389 [693/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:44.942 [694/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:17.075 [695/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.075 [696/722] Linking static target lib/librte_vhost.a 00:03:17.075 [697/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.075 [698/722] Linking target lib/librte_vhost.so.24.2 00:03:29.266 [699/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:29.266 [700/722] Linking static target lib/librte_pipeline.a 00:03:29.266 [701/722] Linking target app/dpdk-proc-info 00:03:29.266 [702/722] Linking target app/dpdk-dumpcap 00:03:29.266 [703/722] Linking target app/dpdk-test-pipeline 00:03:29.266 [704/722] Linking target app/dpdk-test-gpudev 00:03:29.267 [705/722] Linking target app/dpdk-pdump 00:03:29.267 [706/722] Linking target app/dpdk-test-fib 00:03:29.267 [707/722] Linking target app/dpdk-test-flow-perf 00:03:29.267 [708/722] Linking target app/dpdk-test-acl 00:03:29.267 [709/722] Linking target app/dpdk-test-cmdline 00:03:29.267 [710/722] Linking target app/dpdk-test-dma-perf 00:03:29.267 [711/722] Linking target app/dpdk-test-regex 00:03:29.267 [712/722] Linking target app/dpdk-graph 00:03:29.267 [713/722] Linking target app/dpdk-test-mldev 00:03:29.267 [714/722] Linking target app/dpdk-test-security-perf 00:03:29.267 [715/722] Linking target app/dpdk-test-sad 00:03:29.267 [716/722] Linking target app/dpdk-test-bbdev 00:03:29.267 [717/722] Linking target app/dpdk-test-compress-perf 00:03:29.267 [718/722] Linking target app/dpdk-test-crypto-perf 00:03:29.267 [719/722] Linking target app/dpdk-test-eventdev 00:03:29.267 [720/722] Linking target app/dpdk-testpmd 00:03:30.641 [721/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:30.641 [722/722] Linking target lib/librte_pipeline.so.24.2 00:03:30.641 15:21:43 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:30.641 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:30.641 [0/1] Installing files. 00:03:30.903 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:30.904 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.904 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 
00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.905 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.905 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.905 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.906 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.906 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.908 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:30.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:30.909 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing 
lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing 
lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.909 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.910 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.910 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.910 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.910 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.910 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_reorder.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:31.476 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:31.476 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing 
drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:31.476 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.476 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:31.476 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.476 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.479 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.739 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:31.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:31.741 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:31.741 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:31.741 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:31.741 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:31.741 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:03:31.741 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:03:31.741 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:31.741 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:31.741 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:31.741 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:31.741 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:31.741 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:31.741 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:31.741 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:31.741 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:31.741 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:31.741 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:31.741 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:31.741 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:31.741 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:31.741 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:31.741 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:31.741 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:31.741 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:31.741 
Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:31.741 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:31.741 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:31.741 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:31.741 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:31.741 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:31.741 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:31.741 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:31.741 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:31.741 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:31.741 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:31.741 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:31.741 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:31.741 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:31.741 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:31.741 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:31.741 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:31.741 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:31.741 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:31.741 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:31.741 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:31.741 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:31.741 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:31.741 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:31.741 Installing symlink 
pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:31.741 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:31.741 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:31.742 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:31.742 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:31.742 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:31.742 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:31.742 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:31.742 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:31.742 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:31.742 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:31.742 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:31.742 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:31.742 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:31.742 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:31.742 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:31.742 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:31.742 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:31.742 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:31.742 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:31.742 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:31.742 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:31.742 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:31.742 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:31.742 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:31.742 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:31.742 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:31.742 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:31.742 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:31.742 './librte_net_i40e.so.24.2' 
-> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:31.742 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:31.742 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:31.742 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:31.742 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:31.742 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:31.742 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:31.742 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:31.742 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:31.742 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:31.742 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:31.742 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:31.742 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:31.742 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:31.742 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:31.742 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:31.742 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:31.742 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:31.742 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:31.742 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:31.742 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:31.742 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:31.742 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:31.742 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:31.742 Installing symlink pointing to librte_sched.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:31.742 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:31.742 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:31.742 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:31.742 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:31.742 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:31.742 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:31.742 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:31.742 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:31.742 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:31.742 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:31.742 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:31.742 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:31.742 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:31.742 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:31.742 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:31.742 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:31.742 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:31.742 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:31.742 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:31.742 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:31.742 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:31.742 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:31.742 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:31.742 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 
00:03:31.742 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:31.742 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:31.742 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:31.742 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:31.742 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:31.742 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:31.742 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:31.742 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:31.742 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:31.742 15:21:44 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:31.742 15:21:44 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:31.742 15:21:44 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:31.742 15:21:44 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.742 00:03:31.742 real 1m28.216s 00:03:31.742 user 18m32.258s 00:03:31.742 sys 2m12.648s 00:03:31.742 15:21:44 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:31.742 15:21:44 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:31.742 ************************************ 00:03:31.742 END TEST build_native_dpdk 00:03:31.742 ************************************ 00:03:31.742 15:21:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:31.742 15:21:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:31.742 15:21:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:31.742 15:21:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:31.742 15:21:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:31.742 15:21:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:31.742 15:21:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:31.742 15:21:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:31.742 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
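The configure step above picks up the freshly installed DPDK through the libdpdk.pc / libdpdk-libs.pc files placed in dpdk/build/lib/pkgconfig (installed earlier in this phase). A minimal sketch of how that lookup could be reproduced by hand, assuming the same workspace path and a pkg-config binary on the PATH:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk           # version of the DPDK build installed above
    pkg-config --cflags --libs libdpdk        # include and link flags for that build

The reported flags should point at the same DPDK library and include directories that configure prints in the next lines.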
00:03:32.000 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:32.000 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:32.000 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:32.258 Using 'verbs' RDMA provider 00:03:42.796 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:50.947 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:51.205 Creating mk/config.mk...done. 00:03:51.205 Creating mk/cc.flags.mk...done. 00:03:51.205 Type 'make' to build. 00:03:51.205 15:22:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:51.205 15:22:04 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:51.205 15:22:04 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:51.205 15:22:04 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.205 ************************************ 00:03:51.205 START TEST make 00:03:51.205 ************************************ 00:03:51.205 15:22:04 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:51.463 make[1]: Nothing to be done for 'all'. 00:03:53.379 The Meson build system 00:03:53.379 Version: 1.3.1 00:03:53.379 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:53.379 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:53.379 Build type: native build 00:03:53.379 Project name: libvfio-user 00:03:53.379 Project version: 0.0.1 00:03:53.379 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:53.379 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:53.379 Host machine cpu family: x86_64 00:03:53.379 Host machine cpu: x86_64 00:03:53.379 Run-time dependency threads found: YES 00:03:53.379 Library dl found: YES 00:03:53.379 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:53.379 Run-time dependency json-c found: YES 0.17 00:03:53.379 Run-time dependency cmocka found: YES 1.1.7 00:03:53.379 Program pytest-3 found: NO 00:03:53.379 Program flake8 found: NO 00:03:53.379 Program misspell-fixer found: NO 00:03:53.379 Program restructuredtext-lint found: NO 00:03:53.379 Program valgrind found: YES (/usr/bin/valgrind) 00:03:53.379 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:53.379 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:53.379 Compiler for C supports arguments -Wwrite-strings: YES 00:03:53.379 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:53.379 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:53.379 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:53.379 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:53.380 Build targets in project: 8 00:03:53.380 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:53.380 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:53.380 00:03:53.380 libvfio-user 0.0.1 00:03:53.380 00:03:53.380 User defined options 00:03:53.380 buildtype : debug 00:03:53.380 default_library: shared 00:03:53.380 libdir : /usr/local/lib 00:03:53.380 00:03:53.380 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:53.957 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:53.957 [1/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:53.957 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:54.221 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:54.221 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:54.221 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:54.221 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:54.221 [7/37] Compiling C object samples/null.p/null.c.o 00:03:54.221 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:54.221 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:54.221 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:54.221 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:54.221 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:54.221 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:54.221 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:54.221 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:54.221 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:54.221 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:54.221 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:54.221 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:54.221 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:54.221 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:54.221 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:54.221 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:54.221 [24/37] Linking target lib/libvfio-user.so.0.0.1 00:03:54.221 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:54.221 [26/37] Compiling C object samples/server.p/server.c.o 00:03:54.221 [27/37] Compiling C object samples/client.p/client.c.o 00:03:54.221 [28/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:54.482 [29/37] Linking target samples/client 00:03:54.482 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:54.482 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:54.482 [32/37] Linking target test/unit_tests 00:03:54.482 [33/37] Linking target samples/shadow_ioeventfd_server 00:03:54.482 [34/37] Linking target samples/server 00:03:54.482 [35/37] Linking target samples/gpio-pci-idio-16 00:03:54.482 [36/37] Linking target samples/null 00:03:54.482 [37/37] Linking target samples/lspci 00:03:54.482 INFO: autodetecting backend as ninja 00:03:54.482 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
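The libvfio-user build that follows is driven by meson/ninja with the user-defined options summarized above (buildtype debug, default_library shared, libdir /usr/local/lib). A minimal sketch of an equivalent manual invocation, assuming the same source and build directories as this run:

    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
          -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
          meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

The DESTDIR install step corresponds to the command recorded at the start of the next build phase below.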
00:03:54.742 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:55.319 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:55.319 ninja: no work to do. 00:04:07.512 CC lib/log/log.o 00:04:07.512 CC lib/log/log_flags.o 00:04:07.512 CC lib/log/log_deprecated.o 00:04:07.512 CC lib/ut/ut.o 00:04:07.512 CC lib/ut_mock/mock.o 00:04:07.512 LIB libspdk_ut_mock.a 00:04:07.512 LIB libspdk_log.a 00:04:07.512 SO libspdk_ut_mock.so.6.0 00:04:07.512 LIB libspdk_ut.a 00:04:07.512 SO libspdk_log.so.7.0 00:04:07.512 SO libspdk_ut.so.2.0 00:04:07.512 SYMLINK libspdk_ut_mock.so 00:04:07.512 SYMLINK libspdk_ut.so 00:04:07.512 SYMLINK libspdk_log.so 00:04:07.512 CXX lib/trace_parser/trace.o 00:04:07.512 CC lib/dma/dma.o 00:04:07.512 CC lib/util/base64.o 00:04:07.512 CC lib/util/bit_array.o 00:04:07.512 CC lib/util/cpuset.o 00:04:07.512 CC lib/ioat/ioat.o 00:04:07.512 CC lib/util/crc16.o 00:04:07.512 CC lib/util/crc32.o 00:04:07.512 CC lib/util/crc32c.o 00:04:07.512 CC lib/util/crc32_ieee.o 00:04:07.512 CC lib/util/crc64.o 00:04:07.512 CC lib/util/dif.o 00:04:07.512 CC lib/util/fd.o 00:04:07.512 CC lib/util/file.o 00:04:07.512 CC lib/util/hexlify.o 00:04:07.512 CC lib/util/iov.o 00:04:07.512 CC lib/util/math.o 00:04:07.512 CC lib/util/pipe.o 00:04:07.512 CC lib/util/strerror_tls.o 00:04:07.512 CC lib/util/string.o 00:04:07.512 CC lib/util/uuid.o 00:04:07.512 CC lib/util/fd_group.o 00:04:07.512 CC lib/util/xor.o 00:04:07.512 CC lib/util/zipf.o 00:04:07.512 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.512 CC lib/vfio_user/host/vfio_user.o 00:04:07.770 LIB libspdk_dma.a 00:04:07.770 LIB libspdk_ioat.a 00:04:07.770 SO libspdk_dma.so.4.0 00:04:07.770 SO libspdk_ioat.so.7.0 00:04:07.770 SYMLINK libspdk_dma.so 00:04:07.770 SYMLINK libspdk_ioat.so 00:04:07.770 LIB libspdk_vfio_user.a 00:04:07.770 SO libspdk_vfio_user.so.5.0 00:04:07.770 SYMLINK libspdk_vfio_user.so 00:04:08.028 LIB libspdk_util.a 00:04:08.028 SO libspdk_util.so.9.0 00:04:08.285 SYMLINK libspdk_util.so 00:04:08.285 CC lib/idxd/idxd.o 00:04:08.286 CC lib/vmd/vmd.o 00:04:08.286 CC lib/idxd/idxd_user.o 00:04:08.286 CC lib/vmd/led.o 00:04:08.286 CC lib/rdma/common.o 00:04:08.286 CC lib/json/json_parse.o 00:04:08.286 CC lib/rdma/rdma_verbs.o 00:04:08.286 CC lib/conf/conf.o 00:04:08.286 CC lib/json/json_util.o 00:04:08.286 CC lib/env_dpdk/env.o 00:04:08.286 CC lib/json/json_write.o 00:04:08.286 CC lib/env_dpdk/memory.o 00:04:08.286 CC lib/env_dpdk/pci.o 00:04:08.286 CC lib/env_dpdk/init.o 00:04:08.286 CC lib/env_dpdk/threads.o 00:04:08.286 CC lib/env_dpdk/pci_ioat.o 00:04:08.286 CC lib/env_dpdk/pci_virtio.o 00:04:08.286 CC lib/env_dpdk/pci_vmd.o 00:04:08.286 CC lib/env_dpdk/pci_idxd.o 00:04:08.286 CC lib/env_dpdk/pci_event.o 00:04:08.286 CC lib/env_dpdk/sigbus_handler.o 00:04:08.286 CC lib/env_dpdk/pci_dpdk.o 00:04:08.286 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:08.286 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:08.286 LIB libspdk_trace_parser.a 00:04:08.286 SO libspdk_trace_parser.so.5.0 00:04:08.543 SYMLINK libspdk_trace_parser.so 00:04:08.543 LIB libspdk_conf.a 00:04:08.543 SO libspdk_conf.so.6.0 00:04:08.543 LIB libspdk_json.a 00:04:08.543 LIB libspdk_rdma.a 00:04:08.801 SYMLINK libspdk_conf.so 00:04:08.801 SO libspdk_json.so.6.0 00:04:08.801 SO libspdk_rdma.so.6.0 00:04:08.801 SYMLINK libspdk_json.so 00:04:08.801 SYMLINK libspdk_rdma.so 00:04:08.801 LIB 
libspdk_idxd.a 00:04:08.801 CC lib/jsonrpc/jsonrpc_server.o 00:04:08.801 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:08.801 CC lib/jsonrpc/jsonrpc_client.o 00:04:08.801 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:09.059 SO libspdk_idxd.so.12.0 00:04:09.059 SYMLINK libspdk_idxd.so 00:04:09.059 LIB libspdk_vmd.a 00:04:09.059 SO libspdk_vmd.so.6.0 00:04:09.059 SYMLINK libspdk_vmd.so 00:04:09.059 LIB libspdk_jsonrpc.a 00:04:09.316 SO libspdk_jsonrpc.so.6.0 00:04:09.316 SYMLINK libspdk_jsonrpc.so 00:04:09.574 CC lib/rpc/rpc.o 00:04:09.574 LIB libspdk_rpc.a 00:04:09.574 SO libspdk_rpc.so.6.0 00:04:09.831 SYMLINK libspdk_rpc.so 00:04:09.831 CC lib/keyring/keyring.o 00:04:09.831 CC lib/keyring/keyring_rpc.o 00:04:09.831 CC lib/notify/notify.o 00:04:09.831 CC lib/trace/trace.o 00:04:09.831 CC lib/notify/notify_rpc.o 00:04:09.831 CC lib/trace/trace_flags.o 00:04:09.831 CC lib/trace/trace_rpc.o 00:04:10.089 LIB libspdk_notify.a 00:04:10.089 SO libspdk_notify.so.6.0 00:04:10.089 LIB libspdk_keyring.a 00:04:10.089 SYMLINK libspdk_notify.so 00:04:10.089 LIB libspdk_trace.a 00:04:10.089 SO libspdk_keyring.so.1.0 00:04:10.089 SO libspdk_trace.so.10.0 00:04:10.347 SYMLINK libspdk_keyring.so 00:04:10.347 SYMLINK libspdk_trace.so 00:04:10.347 LIB libspdk_env_dpdk.a 00:04:10.347 CC lib/thread/thread.o 00:04:10.347 CC lib/thread/iobuf.o 00:04:10.347 CC lib/sock/sock.o 00:04:10.347 CC lib/sock/sock_rpc.o 00:04:10.347 SO libspdk_env_dpdk.so.14.0 00:04:10.604 SYMLINK libspdk_env_dpdk.so 00:04:10.861 LIB libspdk_sock.a 00:04:10.861 SO libspdk_sock.so.9.0 00:04:10.861 SYMLINK libspdk_sock.so 00:04:11.118 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:11.118 CC lib/nvme/nvme_ctrlr.o 00:04:11.118 CC lib/nvme/nvme_fabric.o 00:04:11.118 CC lib/nvme/nvme_ns_cmd.o 00:04:11.118 CC lib/nvme/nvme_ns.o 00:04:11.118 CC lib/nvme/nvme_pcie_common.o 00:04:11.118 CC lib/nvme/nvme_pcie.o 00:04:11.118 CC lib/nvme/nvme_qpair.o 00:04:11.118 CC lib/nvme/nvme.o 00:04:11.118 CC lib/nvme/nvme_quirks.o 00:04:11.118 CC lib/nvme/nvme_transport.o 00:04:11.118 CC lib/nvme/nvme_discovery.o 00:04:11.118 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:11.118 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:11.118 CC lib/nvme/nvme_tcp.o 00:04:11.118 CC lib/nvme/nvme_opal.o 00:04:11.118 CC lib/nvme/nvme_io_msg.o 00:04:11.118 CC lib/nvme/nvme_poll_group.o 00:04:11.118 CC lib/nvme/nvme_zns.o 00:04:11.118 CC lib/nvme/nvme_stubs.o 00:04:11.118 CC lib/nvme/nvme_auth.o 00:04:11.118 CC lib/nvme/nvme_cuse.o 00:04:11.118 CC lib/nvme/nvme_vfio_user.o 00:04:11.118 CC lib/nvme/nvme_rdma.o 00:04:12.050 LIB libspdk_thread.a 00:04:12.050 SO libspdk_thread.so.10.0 00:04:12.050 SYMLINK libspdk_thread.so 00:04:12.307 CC lib/blob/blobstore.o 00:04:12.307 CC lib/accel/accel.o 00:04:12.307 CC lib/virtio/virtio.o 00:04:12.307 CC lib/vfu_tgt/tgt_endpoint.o 00:04:12.307 CC lib/virtio/virtio_vhost_user.o 00:04:12.307 CC lib/blob/request.o 00:04:12.307 CC lib/init/json_config.o 00:04:12.307 CC lib/virtio/virtio_vfio_user.o 00:04:12.307 CC lib/vfu_tgt/tgt_rpc.o 00:04:12.307 CC lib/accel/accel_rpc.o 00:04:12.307 CC lib/blob/zeroes.o 00:04:12.307 CC lib/init/subsystem.o 00:04:12.307 CC lib/accel/accel_sw.o 00:04:12.307 CC lib/virtio/virtio_pci.o 00:04:12.307 CC lib/blob/blob_bs_dev.o 00:04:12.307 CC lib/init/rpc.o 00:04:12.307 CC lib/init/subsystem_rpc.o 00:04:12.565 LIB libspdk_init.a 00:04:12.565 SO libspdk_init.so.5.0 00:04:12.565 LIB libspdk_virtio.a 00:04:12.565 SYMLINK libspdk_init.so 00:04:12.565 LIB libspdk_vfu_tgt.a 00:04:12.565 SO libspdk_virtio.so.7.0 00:04:12.565 SO libspdk_vfu_tgt.so.3.0 
00:04:12.823 SYMLINK libspdk_virtio.so 00:04:12.823 SYMLINK libspdk_vfu_tgt.so 00:04:12.823 CC lib/event/app.o 00:04:12.823 CC lib/event/reactor.o 00:04:12.823 CC lib/event/log_rpc.o 00:04:12.823 CC lib/event/app_rpc.o 00:04:12.823 CC lib/event/scheduler_static.o 00:04:13.080 LIB libspdk_event.a 00:04:13.338 SO libspdk_event.so.13.0 00:04:13.338 LIB libspdk_accel.a 00:04:13.338 SYMLINK libspdk_event.so 00:04:13.338 SO libspdk_accel.so.15.0 00:04:13.338 SYMLINK libspdk_accel.so 00:04:13.338 LIB libspdk_nvme.a 00:04:13.595 CC lib/bdev/bdev.o 00:04:13.596 CC lib/bdev/bdev_rpc.o 00:04:13.596 CC lib/bdev/bdev_zone.o 00:04:13.596 CC lib/bdev/part.o 00:04:13.596 CC lib/bdev/scsi_nvme.o 00:04:13.596 SO libspdk_nvme.so.13.0 00:04:13.884 SYMLINK libspdk_nvme.so 00:04:15.253 LIB libspdk_blob.a 00:04:15.253 SO libspdk_blob.so.11.0 00:04:15.253 SYMLINK libspdk_blob.so 00:04:15.511 CC lib/lvol/lvol.o 00:04:15.511 CC lib/blobfs/blobfs.o 00:04:15.511 CC lib/blobfs/tree.o 00:04:16.076 LIB libspdk_bdev.a 00:04:16.076 SO libspdk_bdev.so.15.0 00:04:16.337 SYMLINK libspdk_bdev.so 00:04:16.337 LIB libspdk_blobfs.a 00:04:16.337 SO libspdk_blobfs.so.10.0 00:04:16.337 SYMLINK libspdk_blobfs.so 00:04:16.337 CC lib/nbd/nbd.o 00:04:16.337 CC lib/ublk/ublk.o 00:04:16.337 CC lib/nvmf/ctrlr.o 00:04:16.337 CC lib/scsi/dev.o 00:04:16.337 CC lib/nbd/nbd_rpc.o 00:04:16.337 CC lib/nvmf/ctrlr_discovery.o 00:04:16.337 CC lib/scsi/lun.o 00:04:16.337 CC lib/ftl/ftl_core.o 00:04:16.337 CC lib/nvmf/ctrlr_bdev.o 00:04:16.337 CC lib/ublk/ublk_rpc.o 00:04:16.337 CC lib/scsi/port.o 00:04:16.337 CC lib/nvmf/subsystem.o 00:04:16.337 CC lib/ftl/ftl_init.o 00:04:16.337 CC lib/scsi/scsi.o 00:04:16.337 CC lib/ftl/ftl_layout.o 00:04:16.337 CC lib/nvmf/nvmf.o 00:04:16.337 CC lib/scsi/scsi_bdev.o 00:04:16.337 CC lib/nvmf/nvmf_rpc.o 00:04:16.337 CC lib/ftl/ftl_debug.o 00:04:16.337 CC lib/scsi/scsi_pr.o 00:04:16.337 CC lib/ftl/ftl_io.o 00:04:16.337 CC lib/scsi/scsi_rpc.o 00:04:16.337 CC lib/nvmf/tcp.o 00:04:16.337 CC lib/nvmf/transport.o 00:04:16.337 CC lib/ftl/ftl_sb.o 00:04:16.337 CC lib/scsi/task.o 00:04:16.337 CC lib/nvmf/stubs.o 00:04:16.337 CC lib/ftl/ftl_l2p_flat.o 00:04:16.338 CC lib/ftl/ftl_l2p.o 00:04:16.338 CC lib/nvmf/mdns_server.o 00:04:16.338 CC lib/nvmf/vfio_user.o 00:04:16.338 CC lib/ftl/ftl_band.o 00:04:16.338 CC lib/ftl/ftl_nv_cache.o 00:04:16.338 CC lib/nvmf/rdma.o 00:04:16.338 CC lib/nvmf/auth.o 00:04:16.338 CC lib/ftl/ftl_band_ops.o 00:04:16.338 CC lib/ftl/ftl_rq.o 00:04:16.338 CC lib/ftl/ftl_writer.o 00:04:16.338 CC lib/ftl/ftl_reloc.o 00:04:16.338 CC lib/ftl/ftl_l2p_cache.o 00:04:16.338 CC lib/ftl/ftl_p2l.o 00:04:16.338 CC lib/ftl/mngt/ftl_mngt.o 00:04:16.338 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:16.338 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:16.338 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:16.338 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:16.338 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:16.338 LIB libspdk_lvol.a 00:04:16.600 SO libspdk_lvol.so.10.0 00:04:16.600 SYMLINK libspdk_lvol.so 00:04:16.600 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:16.864 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:16.864 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:16.864 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:16.864 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:16.864 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:16.864 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:16.864 CC lib/ftl/utils/ftl_conf.o 00:04:16.864 CC lib/ftl/utils/ftl_md.o 00:04:16.864 CC lib/ftl/utils/ftl_mempool.o 00:04:16.864 CC lib/ftl/utils/ftl_bitmap.o 00:04:16.864 CC lib/ftl/utils/ftl_property.o 00:04:16.864 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:16.864 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:16.864 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:16.864 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:16.864 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:17.124 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:17.124 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:17.124 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:17.124 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:17.124 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:17.124 CC lib/ftl/base/ftl_base_dev.o 00:04:17.124 CC lib/ftl/base/ftl_base_bdev.o 00:04:17.124 CC lib/ftl/ftl_trace.o 00:04:17.382 LIB libspdk_nbd.a 00:04:17.382 SO libspdk_nbd.so.7.0 00:04:17.382 SYMLINK libspdk_nbd.so 00:04:17.382 LIB libspdk_scsi.a 00:04:17.382 SO libspdk_scsi.so.9.0 00:04:17.640 SYMLINK libspdk_scsi.so 00:04:17.640 LIB libspdk_ublk.a 00:04:17.640 SO libspdk_ublk.so.3.0 00:04:17.640 SYMLINK libspdk_ublk.so 00:04:17.640 CC lib/vhost/vhost.o 00:04:17.640 CC lib/iscsi/conn.o 00:04:17.640 CC lib/vhost/vhost_rpc.o 00:04:17.640 CC lib/iscsi/init_grp.o 00:04:17.640 CC lib/vhost/vhost_scsi.o 00:04:17.640 CC lib/iscsi/iscsi.o 00:04:17.640 CC lib/vhost/vhost_blk.o 00:04:17.640 CC lib/iscsi/md5.o 00:04:17.640 CC lib/vhost/rte_vhost_user.o 00:04:17.640 CC lib/iscsi/param.o 00:04:17.640 CC lib/iscsi/portal_grp.o 00:04:17.640 CC lib/iscsi/tgt_node.o 00:04:17.640 CC lib/iscsi/iscsi_subsystem.o 00:04:17.640 CC lib/iscsi/iscsi_rpc.o 00:04:17.640 CC lib/iscsi/task.o 00:04:17.898 LIB libspdk_ftl.a 00:04:18.155 SO libspdk_ftl.so.9.0 00:04:18.413 SYMLINK libspdk_ftl.so 00:04:18.978 LIB libspdk_vhost.a 00:04:18.978 SO libspdk_vhost.so.8.0 00:04:18.978 LIB libspdk_nvmf.a 00:04:18.978 SYMLINK libspdk_vhost.so 00:04:18.978 SO libspdk_nvmf.so.18.0 00:04:19.236 LIB libspdk_iscsi.a 00:04:19.236 SO libspdk_iscsi.so.8.0 00:04:19.236 SYMLINK libspdk_nvmf.so 00:04:19.236 SYMLINK libspdk_iscsi.so 00:04:19.493 CC module/env_dpdk/env_dpdk_rpc.o 00:04:19.493 CC module/vfu_device/vfu_virtio.o 00:04:19.493 CC module/vfu_device/vfu_virtio_blk.o 00:04:19.493 CC module/vfu_device/vfu_virtio_scsi.o 00:04:19.493 CC module/vfu_device/vfu_virtio_rpc.o 00:04:19.751 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:19.751 CC module/keyring/file/keyring.o 00:04:19.751 CC module/sock/posix/posix.o 00:04:19.751 CC module/keyring/file/keyring_rpc.o 00:04:19.752 CC module/accel/dsa/accel_dsa.o 00:04:19.752 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:19.752 CC module/accel/dsa/accel_dsa_rpc.o 00:04:19.752 CC module/scheduler/gscheduler/gscheduler.o 00:04:19.752 CC module/accel/ioat/accel_ioat.o 00:04:19.752 CC module/blob/bdev/blob_bdev.o 00:04:19.752 CC module/accel/ioat/accel_ioat_rpc.o 00:04:19.752 CC module/accel/iaa/accel_iaa.o 00:04:19.752 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.752 CC module/accel/error/accel_error.o 00:04:19.752 CC module/accel/error/accel_error_rpc.o 00:04:19.752 LIB libspdk_env_dpdk_rpc.a 00:04:19.752 SO libspdk_env_dpdk_rpc.so.6.0 00:04:19.752 SYMLINK libspdk_env_dpdk_rpc.so 00:04:19.752 LIB libspdk_keyring_file.a 00:04:19.752 LIB libspdk_scheduler_dpdk_governor.a 00:04:19.752 LIB libspdk_scheduler_gscheduler.a 00:04:20.009 SO libspdk_keyring_file.so.1.0 00:04:20.009 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:20.009 SO libspdk_scheduler_gscheduler.so.4.0 00:04:20.009 LIB libspdk_accel_error.a 00:04:20.009 LIB libspdk_accel_ioat.a 00:04:20.009 LIB libspdk_scheduler_dynamic.a 00:04:20.009 LIB libspdk_accel_iaa.a 00:04:20.009 SO libspdk_accel_error.so.2.0 00:04:20.009 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:04:20.009 SO libspdk_accel_ioat.so.6.0 00:04:20.009 SO libspdk_scheduler_dynamic.so.4.0 00:04:20.009 SYMLINK libspdk_scheduler_gscheduler.so 00:04:20.009 SYMLINK libspdk_keyring_file.so 00:04:20.009 LIB libspdk_accel_dsa.a 00:04:20.009 SO libspdk_accel_iaa.so.3.0 00:04:20.009 SO libspdk_accel_dsa.so.5.0 00:04:20.009 SYMLINK libspdk_accel_error.so 00:04:20.009 SYMLINK libspdk_scheduler_dynamic.so 00:04:20.009 SYMLINK libspdk_accel_ioat.so 00:04:20.009 LIB libspdk_blob_bdev.a 00:04:20.009 SYMLINK libspdk_accel_iaa.so 00:04:20.009 SO libspdk_blob_bdev.so.11.0 00:04:20.009 SYMLINK libspdk_accel_dsa.so 00:04:20.009 SYMLINK libspdk_blob_bdev.so 00:04:20.268 LIB libspdk_vfu_device.a 00:04:20.268 SO libspdk_vfu_device.so.3.0 00:04:20.268 CC module/bdev/nvme/bdev_nvme.o 00:04:20.268 CC module/blobfs/bdev/blobfs_bdev.o 00:04:20.268 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:20.268 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:20.268 CC module/bdev/lvol/vbdev_lvol.o 00:04:20.268 CC module/bdev/null/bdev_null.o 00:04:20.268 CC module/bdev/split/vbdev_split.o 00:04:20.268 CC module/bdev/malloc/bdev_malloc.o 00:04:20.268 CC module/bdev/iscsi/bdev_iscsi.o 00:04:20.268 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.268 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:20.268 CC module/bdev/null/bdev_null_rpc.o 00:04:20.268 CC module/bdev/error/vbdev_error.o 00:04:20.268 CC module/bdev/nvme/nvme_rpc.o 00:04:20.268 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:20.268 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:20.268 CC module/bdev/nvme/bdev_mdns_client.o 00:04:20.268 CC module/bdev/passthru/vbdev_passthru.o 00:04:20.268 CC module/bdev/gpt/gpt.o 00:04:20.268 CC module/bdev/error/vbdev_error_rpc.o 00:04:20.268 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:20.268 CC module/bdev/gpt/vbdev_gpt.o 00:04:20.268 CC module/bdev/nvme/vbdev_opal.o 00:04:20.268 CC module/bdev/aio/bdev_aio.o 00:04:20.268 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:20.268 CC module/bdev/delay/vbdev_delay.o 00:04:20.268 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.268 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:20.268 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.268 CC module/bdev/ftl/bdev_ftl.o 00:04:20.268 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:20.268 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.268 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.268 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:20.268 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:20.268 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:20.268 CC module/bdev/raid/bdev_raid.o 00:04:20.268 CC module/bdev/raid/bdev_raid_rpc.o 00:04:20.268 CC module/bdev/raid/bdev_raid_sb.o 00:04:20.268 CC module/bdev/raid/raid0.o 00:04:20.268 CC module/bdev/raid/raid1.o 00:04:20.268 CC module/bdev/raid/concat.o 00:04:20.527 SYMLINK libspdk_vfu_device.so 00:04:20.527 LIB libspdk_sock_posix.a 00:04:20.527 SO libspdk_sock_posix.so.6.0 00:04:20.527 LIB libspdk_bdev_split.a 00:04:20.786 SO libspdk_bdev_split.so.6.0 00:04:20.786 LIB libspdk_blobfs_bdev.a 00:04:20.786 SYMLINK libspdk_sock_posix.so 00:04:20.786 SYMLINK libspdk_bdev_split.so 00:04:20.786 SO libspdk_blobfs_bdev.so.6.0 00:04:20.786 LIB libspdk_bdev_error.a 00:04:20.786 LIB libspdk_bdev_zone_block.a 00:04:20.786 LIB libspdk_bdev_gpt.a 00:04:20.786 SO libspdk_bdev_error.so.6.0 00:04:20.786 LIB libspdk_bdev_ftl.a 00:04:20.786 SYMLINK libspdk_blobfs_bdev.so 00:04:20.786 LIB libspdk_bdev_passthru.a 00:04:20.786 LIB libspdk_bdev_null.a 00:04:20.786 SO libspdk_bdev_zone_block.so.6.0 
00:04:20.786 SO libspdk_bdev_ftl.so.6.0 00:04:20.786 SO libspdk_bdev_gpt.so.6.0 00:04:20.786 LIB libspdk_bdev_aio.a 00:04:20.786 SO libspdk_bdev_passthru.so.6.0 00:04:20.786 SO libspdk_bdev_null.so.6.0 00:04:20.786 SYMLINK libspdk_bdev_error.so 00:04:20.786 SO libspdk_bdev_aio.so.6.0 00:04:20.786 LIB libspdk_bdev_malloc.a 00:04:20.786 SYMLINK libspdk_bdev_zone_block.so 00:04:20.786 SYMLINK libspdk_bdev_gpt.so 00:04:21.044 SYMLINK libspdk_bdev_ftl.so 00:04:21.044 SYMLINK libspdk_bdev_passthru.so 00:04:21.044 LIB libspdk_bdev_iscsi.a 00:04:21.044 LIB libspdk_bdev_delay.a 00:04:21.044 SO libspdk_bdev_malloc.so.6.0 00:04:21.044 SYMLINK libspdk_bdev_null.so 00:04:21.044 SYMLINK libspdk_bdev_aio.so 00:04:21.044 SO libspdk_bdev_iscsi.so.6.0 00:04:21.044 SO libspdk_bdev_delay.so.6.0 00:04:21.044 SYMLINK libspdk_bdev_malloc.so 00:04:21.044 SYMLINK libspdk_bdev_delay.so 00:04:21.044 SYMLINK libspdk_bdev_iscsi.so 00:04:21.044 LIB libspdk_bdev_virtio.a 00:04:21.044 SO libspdk_bdev_virtio.so.6.0 00:04:21.044 LIB libspdk_bdev_lvol.a 00:04:21.044 SO libspdk_bdev_lvol.so.6.0 00:04:21.044 SYMLINK libspdk_bdev_virtio.so 00:04:21.302 SYMLINK libspdk_bdev_lvol.so 00:04:21.561 LIB libspdk_bdev_raid.a 00:04:21.561 SO libspdk_bdev_raid.so.6.0 00:04:21.561 SYMLINK libspdk_bdev_raid.so 00:04:22.497 LIB libspdk_bdev_nvme.a 00:04:22.756 SO libspdk_bdev_nvme.so.7.0 00:04:22.756 SYMLINK libspdk_bdev_nvme.so 00:04:23.014 CC module/event/subsystems/sock/sock.o 00:04:23.014 CC module/event/subsystems/iobuf/iobuf.o 00:04:23.014 CC module/event/subsystems/keyring/keyring.o 00:04:23.014 CC module/event/subsystems/scheduler/scheduler.o 00:04:23.014 CC module/event/subsystems/vmd/vmd.o 00:04:23.014 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:23.014 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:23.014 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:23.014 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:23.272 LIB libspdk_event_sock.a 00:04:23.272 LIB libspdk_event_keyring.a 00:04:23.272 LIB libspdk_event_scheduler.a 00:04:23.272 LIB libspdk_event_vfu_tgt.a 00:04:23.272 LIB libspdk_event_vhost_blk.a 00:04:23.272 LIB libspdk_event_vmd.a 00:04:23.272 SO libspdk_event_sock.so.5.0 00:04:23.272 SO libspdk_event_keyring.so.1.0 00:04:23.272 SO libspdk_event_scheduler.so.4.0 00:04:23.272 LIB libspdk_event_iobuf.a 00:04:23.272 SO libspdk_event_vfu_tgt.so.3.0 00:04:23.272 SO libspdk_event_vhost_blk.so.3.0 00:04:23.272 SO libspdk_event_vmd.so.6.0 00:04:23.272 SO libspdk_event_iobuf.so.3.0 00:04:23.272 SYMLINK libspdk_event_sock.so 00:04:23.272 SYMLINK libspdk_event_keyring.so 00:04:23.272 SYMLINK libspdk_event_scheduler.so 00:04:23.272 SYMLINK libspdk_event_vhost_blk.so 00:04:23.272 SYMLINK libspdk_event_vfu_tgt.so 00:04:23.272 SYMLINK libspdk_event_vmd.so 00:04:23.272 SYMLINK libspdk_event_iobuf.so 00:04:23.530 CC module/event/subsystems/accel/accel.o 00:04:23.788 LIB libspdk_event_accel.a 00:04:23.788 SO libspdk_event_accel.so.6.0 00:04:23.788 SYMLINK libspdk_event_accel.so 00:04:24.046 CC module/event/subsystems/bdev/bdev.o 00:04:24.046 LIB libspdk_event_bdev.a 00:04:24.046 SO libspdk_event_bdev.so.6.0 00:04:24.305 SYMLINK libspdk_event_bdev.so 00:04:24.305 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:24.305 CC module/event/subsystems/scsi/scsi.o 00:04:24.305 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:24.305 CC module/event/subsystems/ublk/ublk.o 00:04:24.305 CC module/event/subsystems/nbd/nbd.o 00:04:24.562 LIB libspdk_event_nbd.a 00:04:24.562 LIB libspdk_event_ublk.a 00:04:24.562 LIB libspdk_event_scsi.a 
00:04:24.562 SO libspdk_event_nbd.so.6.0 00:04:24.562 SO libspdk_event_ublk.so.3.0 00:04:24.562 SO libspdk_event_scsi.so.6.0 00:04:24.562 SYMLINK libspdk_event_nbd.so 00:04:24.562 SYMLINK libspdk_event_ublk.so 00:04:24.562 SYMLINK libspdk_event_scsi.so 00:04:24.562 LIB libspdk_event_nvmf.a 00:04:24.562 SO libspdk_event_nvmf.so.6.0 00:04:24.820 SYMLINK libspdk_event_nvmf.so 00:04:24.820 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:24.820 CC module/event/subsystems/iscsi/iscsi.o 00:04:24.820 LIB libspdk_event_vhost_scsi.a 00:04:24.820 LIB libspdk_event_iscsi.a 00:04:24.820 SO libspdk_event_vhost_scsi.so.3.0 00:04:25.078 SO libspdk_event_iscsi.so.6.0 00:04:25.078 SYMLINK libspdk_event_vhost_scsi.so 00:04:25.078 SYMLINK libspdk_event_iscsi.so 00:04:25.078 SO libspdk.so.6.0 00:04:25.078 SYMLINK libspdk.so 00:04:25.340 CXX app/trace/trace.o 00:04:25.340 CC app/trace_record/trace_record.o 00:04:25.340 TEST_HEADER include/spdk/accel.h 00:04:25.340 TEST_HEADER include/spdk/accel_module.h 00:04:25.340 CC app/spdk_nvme_identify/identify.o 00:04:25.340 CC app/spdk_nvme_perf/perf.o 00:04:25.340 TEST_HEADER include/spdk/assert.h 00:04:25.340 CC app/spdk_top/spdk_top.o 00:04:25.340 CC test/rpc_client/rpc_client_test.o 00:04:25.340 TEST_HEADER include/spdk/barrier.h 00:04:25.340 CC app/spdk_nvme_discover/discovery_aer.o 00:04:25.340 CC app/spdk_lspci/spdk_lspci.o 00:04:25.340 TEST_HEADER include/spdk/base64.h 00:04:25.340 TEST_HEADER include/spdk/bdev.h 00:04:25.340 TEST_HEADER include/spdk/bdev_module.h 00:04:25.340 TEST_HEADER include/spdk/bdev_zone.h 00:04:25.340 TEST_HEADER include/spdk/bit_array.h 00:04:25.340 TEST_HEADER include/spdk/bit_pool.h 00:04:25.340 TEST_HEADER include/spdk/blob_bdev.h 00:04:25.340 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:25.340 TEST_HEADER include/spdk/blobfs.h 00:04:25.340 TEST_HEADER include/spdk/blob.h 00:04:25.340 TEST_HEADER include/spdk/conf.h 00:04:25.340 TEST_HEADER include/spdk/config.h 00:04:25.340 TEST_HEADER include/spdk/cpuset.h 00:04:25.340 TEST_HEADER include/spdk/crc16.h 00:04:25.340 CC app/spdk_dd/spdk_dd.o 00:04:25.340 TEST_HEADER include/spdk/crc32.h 00:04:25.340 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:25.340 TEST_HEADER include/spdk/crc64.h 00:04:25.340 TEST_HEADER include/spdk/dif.h 00:04:25.340 TEST_HEADER include/spdk/dma.h 00:04:25.340 TEST_HEADER include/spdk/endian.h 00:04:25.340 TEST_HEADER include/spdk/env_dpdk.h 00:04:25.340 TEST_HEADER include/spdk/env.h 00:04:25.340 TEST_HEADER include/spdk/event.h 00:04:25.340 CC app/iscsi_tgt/iscsi_tgt.o 00:04:25.340 CC app/nvmf_tgt/nvmf_main.o 00:04:25.340 TEST_HEADER include/spdk/fd_group.h 00:04:25.340 TEST_HEADER include/spdk/fd.h 00:04:25.340 CC app/vhost/vhost.o 00:04:25.340 TEST_HEADER include/spdk/file.h 00:04:25.340 TEST_HEADER include/spdk/ftl.h 00:04:25.340 TEST_HEADER include/spdk/gpt_spec.h 00:04:25.340 TEST_HEADER include/spdk/hexlify.h 00:04:25.601 TEST_HEADER include/spdk/histogram_data.h 00:04:25.601 TEST_HEADER include/spdk/idxd.h 00:04:25.601 TEST_HEADER include/spdk/idxd_spec.h 00:04:25.601 TEST_HEADER include/spdk/init.h 00:04:25.601 CC app/spdk_tgt/spdk_tgt.o 00:04:25.601 TEST_HEADER include/spdk/ioat.h 00:04:25.601 CC examples/util/zipf/zipf.o 00:04:25.601 TEST_HEADER include/spdk/ioat_spec.h 00:04:25.601 CC test/event/reactor/reactor.o 00:04:25.601 CC test/app/stub/stub.o 00:04:25.601 CC examples/nvme/hello_world/hello_world.o 00:04:25.601 TEST_HEADER include/spdk/iscsi_spec.h 00:04:25.601 CC test/event/reactor_perf/reactor_perf.o 00:04:25.601 CC 
test/event/event_perf/event_perf.o 00:04:25.601 CC test/nvme/aer/aer.o 00:04:25.601 CC test/app/histogram_perf/histogram_perf.o 00:04:25.601 TEST_HEADER include/spdk/json.h 00:04:25.601 CC test/app/jsoncat/jsoncat.o 00:04:25.601 CC examples/nvme/reconnect/reconnect.o 00:04:25.601 CC examples/idxd/perf/perf.o 00:04:25.601 TEST_HEADER include/spdk/jsonrpc.h 00:04:25.601 TEST_HEADER include/spdk/keyring.h 00:04:25.601 CC test/env/vtophys/vtophys.o 00:04:25.601 CC examples/ioat/perf/perf.o 00:04:25.601 TEST_HEADER include/spdk/keyring_module.h 00:04:25.601 CC app/fio/nvme/fio_plugin.o 00:04:25.601 CC examples/accel/perf/accel_perf.o 00:04:25.601 TEST_HEADER include/spdk/likely.h 00:04:25.601 TEST_HEADER include/spdk/log.h 00:04:25.601 CC examples/sock/hello_world/hello_sock.o 00:04:25.601 CC examples/vmd/lsvmd/lsvmd.o 00:04:25.601 CC test/thread/poller_perf/poller_perf.o 00:04:25.601 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:25.601 TEST_HEADER include/spdk/lvol.h 00:04:25.601 TEST_HEADER include/spdk/memory.h 00:04:25.601 TEST_HEADER include/spdk/mmio.h 00:04:25.601 CC test/event/app_repeat/app_repeat.o 00:04:25.601 TEST_HEADER include/spdk/nbd.h 00:04:25.602 TEST_HEADER include/spdk/notify.h 00:04:25.602 TEST_HEADER include/spdk/nvme.h 00:04:25.602 TEST_HEADER include/spdk/nvme_intel.h 00:04:25.602 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:25.602 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:25.602 TEST_HEADER include/spdk/nvme_spec.h 00:04:25.602 CC examples/blob/cli/blobcli.o 00:04:25.602 TEST_HEADER include/spdk/nvme_zns.h 00:04:25.602 CC test/blobfs/mkfs/mkfs.o 00:04:25.602 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:25.602 CC test/accel/dif/dif.o 00:04:25.602 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:25.602 TEST_HEADER include/spdk/nvmf.h 00:04:25.602 CC test/bdev/bdevio/bdevio.o 00:04:25.602 CC test/app/bdev_svc/bdev_svc.o 00:04:25.602 TEST_HEADER include/spdk/nvmf_spec.h 00:04:25.602 CC examples/blob/hello_world/hello_blob.o 00:04:25.602 TEST_HEADER include/spdk/nvmf_transport.h 00:04:25.602 CC examples/thread/thread/thread_ex.o 00:04:25.602 TEST_HEADER include/spdk/opal.h 00:04:25.602 CC examples/nvmf/nvmf/nvmf.o 00:04:25.602 CC examples/bdev/hello_world/hello_bdev.o 00:04:25.602 CC test/dma/test_dma/test_dma.o 00:04:25.602 TEST_HEADER include/spdk/opal_spec.h 00:04:25.602 TEST_HEADER include/spdk/pci_ids.h 00:04:25.602 TEST_HEADER include/spdk/pipe.h 00:04:25.602 TEST_HEADER include/spdk/queue.h 00:04:25.602 TEST_HEADER include/spdk/reduce.h 00:04:25.602 TEST_HEADER include/spdk/rpc.h 00:04:25.602 TEST_HEADER include/spdk/scheduler.h 00:04:25.602 TEST_HEADER include/spdk/scsi.h 00:04:25.602 TEST_HEADER include/spdk/scsi_spec.h 00:04:25.602 TEST_HEADER include/spdk/sock.h 00:04:25.602 TEST_HEADER include/spdk/stdinc.h 00:04:25.602 CC test/lvol/esnap/esnap.o 00:04:25.602 TEST_HEADER include/spdk/string.h 00:04:25.602 TEST_HEADER include/spdk/thread.h 00:04:25.602 CC test/env/mem_callbacks/mem_callbacks.o 00:04:25.602 TEST_HEADER include/spdk/trace.h 00:04:25.602 TEST_HEADER include/spdk/trace_parser.h 00:04:25.602 LINK spdk_lspci 00:04:25.602 TEST_HEADER include/spdk/tree.h 00:04:25.602 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:25.602 TEST_HEADER include/spdk/ublk.h 00:04:25.602 TEST_HEADER include/spdk/util.h 00:04:25.602 TEST_HEADER include/spdk/uuid.h 00:04:25.602 TEST_HEADER include/spdk/version.h 00:04:25.602 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:25.602 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:25.602 TEST_HEADER include/spdk/vhost.h 
00:04:25.602 TEST_HEADER include/spdk/vmd.h 00:04:25.865 TEST_HEADER include/spdk/xor.h 00:04:25.865 TEST_HEADER include/spdk/zipf.h 00:04:25.865 CXX test/cpp_headers/accel.o 00:04:25.865 LINK rpc_client_test 00:04:25.865 LINK spdk_nvme_discover 00:04:25.865 LINK reactor 00:04:25.865 LINK jsoncat 00:04:25.865 LINK interrupt_tgt 00:04:25.865 LINK reactor_perf 00:04:25.865 LINK histogram_perf 00:04:25.865 LINK event_perf 00:04:25.865 LINK vtophys 00:04:25.865 LINK lsvmd 00:04:25.865 LINK poller_perf 00:04:25.865 LINK zipf 00:04:25.865 LINK env_dpdk_post_init 00:04:25.865 LINK vhost 00:04:25.865 LINK app_repeat 00:04:25.865 LINK nvmf_tgt 00:04:25.865 LINK spdk_trace_record 00:04:25.865 LINK stub 00:04:25.865 LINK iscsi_tgt 00:04:25.865 LINK spdk_tgt 00:04:25.865 LINK bdev_svc 00:04:25.865 LINK ioat_perf 00:04:25.865 LINK hello_world 00:04:25.865 LINK mkfs 00:04:26.126 LINK hello_sock 00:04:26.127 LINK hello_blob 00:04:26.127 LINK aer 00:04:26.127 LINK hello_bdev 00:04:26.127 LINK thread 00:04:26.127 LINK spdk_dd 00:04:26.127 CXX test/cpp_headers/accel_module.o 00:04:26.127 CXX test/cpp_headers/assert.o 00:04:26.127 CC test/env/memory/memory_ut.o 00:04:26.127 LINK nvmf 00:04:26.127 LINK idxd_perf 00:04:26.127 LINK reconnect 00:04:26.127 LINK spdk_trace 00:04:26.390 CC examples/ioat/verify/verify.o 00:04:26.390 LINK dif 00:04:26.390 LINK bdevio 00:04:26.390 LINK test_dma 00:04:26.390 CC test/env/pci/pci_ut.o 00:04:26.390 CC app/fio/bdev/fio_plugin.o 00:04:26.390 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:26.390 CC examples/vmd/led/led.o 00:04:26.390 CC test/nvme/reset/reset.o 00:04:26.390 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:26.390 CC test/nvme/sgl/sgl.o 00:04:26.390 CXX test/cpp_headers/barrier.o 00:04:26.390 CC test/nvme/e2edp/nvme_dp.o 00:04:26.390 CC test/event/scheduler/scheduler.o 00:04:26.390 LINK accel_perf 00:04:26.390 CXX test/cpp_headers/base64.o 00:04:26.390 CC test/nvme/overhead/overhead.o 00:04:26.390 CXX test/cpp_headers/bdev.o 00:04:26.390 CC test/nvme/startup/startup.o 00:04:26.390 CC test/nvme/err_injection/err_injection.o 00:04:26.654 CXX test/cpp_headers/bdev_module.o 00:04:26.654 CC examples/bdev/bdevperf/bdevperf.o 00:04:26.654 CC examples/nvme/arbitration/arbitration.o 00:04:26.654 CXX test/cpp_headers/bdev_zone.o 00:04:26.654 CXX test/cpp_headers/bit_array.o 00:04:26.654 CC examples/nvme/hotplug/hotplug.o 00:04:26.654 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:26.654 CXX test/cpp_headers/bit_pool.o 00:04:26.654 CC examples/nvme/abort/abort.o 00:04:26.654 LINK blobcli 00:04:26.654 LINK nvme_fuzz 00:04:26.654 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:26.654 CXX test/cpp_headers/blob_bdev.o 00:04:26.654 CXX test/cpp_headers/blobfs_bdev.o 00:04:26.654 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:26.654 CXX test/cpp_headers/blobfs.o 00:04:26.654 CC test/nvme/reserve/reserve.o 00:04:26.654 LINK spdk_nvme 00:04:26.654 LINK verify 00:04:26.654 CXX test/cpp_headers/blob.o 00:04:26.654 LINK led 00:04:26.654 CC test/nvme/simple_copy/simple_copy.o 00:04:26.915 CC test/nvme/boot_partition/boot_partition.o 00:04:26.915 CC test/nvme/connect_stress/connect_stress.o 00:04:26.915 CXX test/cpp_headers/conf.o 00:04:26.915 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:26.915 LINK mem_callbacks 00:04:26.915 LINK startup 00:04:26.915 CXX test/cpp_headers/config.o 00:04:26.915 CXX test/cpp_headers/cpuset.o 00:04:26.915 LINK err_injection 00:04:26.915 LINK reset 00:04:26.915 LINK scheduler 00:04:26.915 CXX test/cpp_headers/crc16.o 00:04:26.915 LINK 
spdk_nvme_perf 00:04:26.915 CXX test/cpp_headers/crc32.o 00:04:26.915 CXX test/cpp_headers/crc64.o 00:04:26.915 CXX test/cpp_headers/dif.o 00:04:26.915 LINK cmb_copy 00:04:26.915 CXX test/cpp_headers/dma.o 00:04:26.915 CXX test/cpp_headers/endian.o 00:04:26.916 CXX test/cpp_headers/env_dpdk.o 00:04:26.916 LINK pmr_persistence 00:04:26.916 CC test/nvme/compliance/nvme_compliance.o 00:04:26.916 CC test/nvme/fused_ordering/fused_ordering.o 00:04:26.916 LINK sgl 00:04:27.205 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:27.205 CXX test/cpp_headers/env.o 00:04:27.205 LINK nvme_dp 00:04:27.205 CXX test/cpp_headers/event.o 00:04:27.205 CXX test/cpp_headers/fd_group.o 00:04:27.205 CC test/nvme/fdp/fdp.o 00:04:27.205 CXX test/cpp_headers/fd.o 00:04:27.205 LINK hotplug 00:04:27.205 CC test/nvme/cuse/cuse.o 00:04:27.205 LINK overhead 00:04:27.205 CXX test/cpp_headers/file.o 00:04:27.205 CXX test/cpp_headers/ftl.o 00:04:27.205 LINK reserve 00:04:27.205 CXX test/cpp_headers/gpt_spec.o 00:04:27.205 LINK spdk_nvme_identify 00:04:27.205 CXX test/cpp_headers/hexlify.o 00:04:27.205 LINK boot_partition 00:04:27.205 LINK spdk_top 00:04:27.205 LINK connect_stress 00:04:27.205 LINK pci_ut 00:04:27.205 CXX test/cpp_headers/histogram_data.o 00:04:27.205 LINK arbitration 00:04:27.205 LINK simple_copy 00:04:27.205 CXX test/cpp_headers/idxd.o 00:04:27.205 CXX test/cpp_headers/idxd_spec.o 00:04:27.205 CXX test/cpp_headers/init.o 00:04:27.465 CXX test/cpp_headers/ioat.o 00:04:27.465 CXX test/cpp_headers/ioat_spec.o 00:04:27.465 LINK abort 00:04:27.465 CXX test/cpp_headers/iscsi_spec.o 00:04:27.465 CXX test/cpp_headers/json.o 00:04:27.465 CXX test/cpp_headers/jsonrpc.o 00:04:27.465 LINK spdk_bdev 00:04:27.465 CXX test/cpp_headers/keyring.o 00:04:27.465 CXX test/cpp_headers/keyring_module.o 00:04:27.465 CXX test/cpp_headers/likely.o 00:04:27.465 LINK nvme_manage 00:04:27.465 CXX test/cpp_headers/log.o 00:04:27.465 CXX test/cpp_headers/lvol.o 00:04:27.465 LINK doorbell_aers 00:04:27.465 CXX test/cpp_headers/memory.o 00:04:27.465 CXX test/cpp_headers/mmio.o 00:04:27.465 CXX test/cpp_headers/nbd.o 00:04:27.465 CXX test/cpp_headers/notify.o 00:04:27.465 CXX test/cpp_headers/nvme.o 00:04:27.465 CXX test/cpp_headers/nvme_intel.o 00:04:27.465 CXX test/cpp_headers/nvme_ocssd.o 00:04:27.465 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:27.465 CXX test/cpp_headers/nvme_spec.o 00:04:27.465 CXX test/cpp_headers/nvme_zns.o 00:04:27.465 CXX test/cpp_headers/nvmf_cmd.o 00:04:27.465 LINK fused_ordering 00:04:27.465 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.465 CXX test/cpp_headers/nvmf.o 00:04:27.465 CXX test/cpp_headers/nvmf_spec.o 00:04:27.465 CXX test/cpp_headers/nvmf_transport.o 00:04:27.465 CXX test/cpp_headers/opal.o 00:04:27.726 CXX test/cpp_headers/opal_spec.o 00:04:27.726 CXX test/cpp_headers/pci_ids.o 00:04:27.726 CXX test/cpp_headers/pipe.o 00:04:27.726 CXX test/cpp_headers/queue.o 00:04:27.726 CXX test/cpp_headers/reduce.o 00:04:27.726 CXX test/cpp_headers/rpc.o 00:04:27.726 CXX test/cpp_headers/scheduler.o 00:04:27.726 CXX test/cpp_headers/scsi.o 00:04:27.726 CXX test/cpp_headers/scsi_spec.o 00:04:27.726 CXX test/cpp_headers/stdinc.o 00:04:27.726 CXX test/cpp_headers/sock.o 00:04:27.726 CXX test/cpp_headers/string.o 00:04:27.726 CXX test/cpp_headers/thread.o 00:04:27.726 LINK nvme_compliance 00:04:27.726 LINK vhost_fuzz 00:04:27.726 CXX test/cpp_headers/trace_parser.o 00:04:27.726 CXX test/cpp_headers/trace.o 00:04:27.726 CXX test/cpp_headers/tree.o 00:04:27.726 LINK fdp 00:04:27.726 CXX test/cpp_headers/ublk.o 
00:04:27.726 CXX test/cpp_headers/util.o 00:04:27.726 CXX test/cpp_headers/uuid.o 00:04:27.726 CXX test/cpp_headers/version.o 00:04:27.726 CXX test/cpp_headers/vfio_user_pci.o 00:04:27.726 CXX test/cpp_headers/vfio_user_spec.o 00:04:27.726 CXX test/cpp_headers/vhost.o 00:04:27.726 CXX test/cpp_headers/vmd.o 00:04:27.726 CXX test/cpp_headers/xor.o 00:04:27.983 CXX test/cpp_headers/zipf.o 00:04:27.983 LINK bdevperf 00:04:27.983 LINK memory_ut 00:04:28.548 LINK cuse 00:04:28.806 LINK iscsi_fuzz 00:04:31.366 LINK esnap 00:04:31.624 00:04:31.624 real 0m40.345s 00:04:31.624 user 7m39.503s 00:04:31.624 sys 1m52.738s 00:04:31.624 15:22:44 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:31.624 15:22:44 make -- common/autotest_common.sh@10 -- $ set +x 00:04:31.624 ************************************ 00:04:31.624 END TEST make 00:04:31.624 ************************************ 00:04:31.624 15:22:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:31.624 15:22:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:31.624 15:22:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:31.624 15:22:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.624 15:22:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:31.624 15:22:44 -- pm/common@44 -- $ pid=1064607 00:04:31.624 15:22:44 -- pm/common@50 -- $ kill -TERM 1064607 00:04:31.624 15:22:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.624 15:22:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:31.624 15:22:44 -- pm/common@44 -- $ pid=1064609 00:04:31.624 15:22:44 -- pm/common@50 -- $ kill -TERM 1064609 00:04:31.624 15:22:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.624 15:22:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:31.624 15:22:44 -- pm/common@44 -- $ pid=1064611 00:04:31.624 15:22:44 -- pm/common@50 -- $ kill -TERM 1064611 00:04:31.624 15:22:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.624 15:22:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:31.624 15:22:44 -- pm/common@44 -- $ pid=1064645 00:04:31.624 15:22:44 -- pm/common@50 -- $ sudo -E kill -TERM 1064645 00:04:31.881 15:22:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:31.881 15:22:44 -- nvmf/common.sh@7 -- # uname -s 00:04:31.881 15:22:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.881 15:22:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.881 15:22:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.881 15:22:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.882 15:22:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.882 15:22:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.882 15:22:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.882 15:22:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.882 15:22:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.882 15:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.882 15:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:31.882 15:22:44 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:31.882 15:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.882 15:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.882 15:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:31.882 15:22:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.882 15:22:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:31.882 15:22:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.882 15:22:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.882 15:22:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.882 15:22:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.882 15:22:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.882 15:22:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.882 15:22:44 -- paths/export.sh@5 -- # export PATH 00:04:31.882 15:22:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.882 15:22:44 -- nvmf/common.sh@47 -- # : 0 00:04:31.882 15:22:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:31.882 15:22:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:31.882 15:22:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.882 15:22:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.882 15:22:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.882 15:22:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:31.882 15:22:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:31.882 15:22:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:31.882 15:22:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:31.882 15:22:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:31.882 15:22:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:31.882 15:22:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:31.882 15:22:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:31.882 15:22:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:31.882 15:22:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:31.882 15:22:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:31.882 15:22:44 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:04:31.882 15:22:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:31.882 15:22:44 -- spdk/autotest.sh@48 -- # udevadm_pid=1141903 00:04:31.882 15:22:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:31.882 15:22:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:31.882 15:22:44 -- pm/common@17 -- # local monitor 00:04:31.882 15:22:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.882 15:22:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.882 15:22:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.882 15:22:44 -- pm/common@21 -- # date +%s 00:04:31.882 15:22:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.882 15:22:44 -- pm/common@21 -- # date +%s 00:04:31.882 15:22:44 -- pm/common@25 -- # sleep 1 00:04:31.882 15:22:44 -- pm/common@21 -- # date +%s 00:04:31.882 15:22:44 -- pm/common@21 -- # date +%s 00:04:31.882 15:22:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715779364 00:04:31.882 15:22:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715779364 00:04:31.882 15:22:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715779364 00:04:31.882 15:22:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715779364 00:04:31.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715779364_collect-vmstat.pm.log 00:04:31.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715779364_collect-cpu-load.pm.log 00:04:31.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715779364_collect-cpu-temp.pm.log 00:04:31.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715779364_collect-bmc-pm.bmc.pm.log 00:04:32.814 15:22:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:32.814 15:22:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:32.814 15:22:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:32.814 15:22:45 -- common/autotest_common.sh@10 -- # set +x 00:04:32.814 15:22:45 -- spdk/autotest.sh@59 -- # create_test_list 00:04:32.814 15:22:45 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:32.814 15:22:45 -- common/autotest_common.sh@10 -- # set +x 00:04:32.814 15:22:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:32.814 15:22:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.814 15:22:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.814 15:22:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:32.814 15:22:45 -- spdk/autotest.sh@63 -- # cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.814 15:22:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:32.814 15:22:45 -- common/autotest_common.sh@1451 -- # uname 00:04:32.814 15:22:45 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:32.814 15:22:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:32.814 15:22:45 -- common/autotest_common.sh@1471 -- # uname 00:04:32.814 15:22:45 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:32.814 15:22:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:32.814 15:22:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:32.814 15:22:45 -- spdk/autotest.sh@72 -- # hash lcov 00:04:32.814 15:22:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:32.814 15:22:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:32.814 --rc lcov_branch_coverage=1 00:04:32.814 --rc lcov_function_coverage=1 00:04:32.814 --rc genhtml_branch_coverage=1 00:04:32.814 --rc genhtml_function_coverage=1 00:04:32.814 --rc genhtml_legend=1 00:04:32.814 --rc geninfo_all_blocks=1 00:04:32.814 ' 00:04:32.814 15:22:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:32.814 --rc lcov_branch_coverage=1 00:04:32.814 --rc lcov_function_coverage=1 00:04:32.814 --rc genhtml_branch_coverage=1 00:04:32.814 --rc genhtml_function_coverage=1 00:04:32.814 --rc genhtml_legend=1 00:04:32.814 --rc geninfo_all_blocks=1 00:04:32.814 ' 00:04:32.814 15:22:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:32.814 --rc lcov_branch_coverage=1 00:04:32.814 --rc lcov_function_coverage=1 00:04:32.814 --rc genhtml_branch_coverage=1 00:04:32.814 --rc genhtml_function_coverage=1 00:04:32.814 --rc genhtml_legend=1 00:04:32.814 --rc geninfo_all_blocks=1 00:04:32.814 --no-external' 00:04:32.814 15:22:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:32.814 --rc lcov_branch_coverage=1 00:04:32.814 --rc lcov_function_coverage=1 00:04:32.814 --rc genhtml_branch_coverage=1 00:04:32.814 --rc genhtml_function_coverage=1 00:04:32.814 --rc genhtml_legend=1 00:04:32.814 --rc geninfo_all_blocks=1 00:04:32.814 --no-external' 00:04:32.814 15:22:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:32.814 lcov: LCOV version 1.14 00:04:32.814 15:22:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:45.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:45.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:46.901 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:46.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:46.901 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:46.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:46.901 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:04.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:04.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:04.978 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:04.978 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:04.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:04.979 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 
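The "no functions found" warnings in this block are expected for the cpp_headers check: each public spdk/*.h appears to be compiled as a standalone C++ translation unit just to verify the header builds on its own, so the resulting .gcno files contain no function records for geninfo to report. A minimal sketch of that effect, assuming only that g++ and lcov are installed (json.cpp and the include path are illustrative, not the actual SPDK build rule):

    # compile a header-only translation unit with coverage instrumentation;
    # it defines no functions, so the emitted json.gcno carries no records
    echo '#include "spdk/json.h"' > json.cpp
    g++ --coverage -I/path/to/spdk/include -c json.cpp -o json.o
    # a baseline coverage capture over such .gcno files prints WARNING lines
    # like the ones logged above
    lcov --capture --initial --directory . --output-file baseline.info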
00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:04.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:04.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:05.913 15:23:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:05.913 15:23:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:05.913 15:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 15:23:18 -- spdk/autotest.sh@91 -- # rm -f 00:05:05.913 15:23:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.286 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:07.286 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:07.286 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:07.286 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:07.286 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:07.286 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:07.286 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:07.286 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:07.286 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:05:07.286 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:07.286 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:07.286 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:07.286 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:07.286 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:07.286 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:07.286 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:07.286 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:07.286 15:23:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:07.286 15:23:20 -- common/autotest_common.sh@1665 -- # zoned_devs=() 
00:05:07.286 15:23:20 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:07.286 15:23:20 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:07.286 15:23:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:07.286 15:23:20 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:07.286 15:23:20 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:07.286 15:23:20 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:07.286 15:23:20 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:07.286 15:23:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:07.286 15:23:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.286 15:23:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.286 15:23:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:07.286 15:23:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:07.286 15:23:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:07.544 No valid GPT data, bailing 00:05:07.544 15:23:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:07.544 15:23:20 -- scripts/common.sh@391 -- # pt= 00:05:07.544 15:23:20 -- scripts/common.sh@392 -- # return 1 00:05:07.544 15:23:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:07.544 1+0 records in 00:05:07.544 1+0 records out 00:05:07.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0019512 s, 537 MB/s 00:05:07.544 15:23:20 -- spdk/autotest.sh@118 -- # sync 00:05:07.544 15:23:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:07.544 15:23:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:07.544 15:23:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:09.473 15:23:22 -- spdk/autotest.sh@124 -- # uname -s 00:05:09.473 15:23:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:09.473 15:23:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:09.473 15:23:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.473 15:23:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.473 15:23:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.473 ************************************ 00:05:09.473 START TEST setup.sh 00:05:09.473 ************************************ 00:05:09.473 15:23:22 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:09.473 * Looking for test storage... 
00:05:09.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:09.473 15:23:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:09.473 15:23:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:09.473 15:23:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:09.474 15:23:22 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.474 15:23:22 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.474 15:23:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 START TEST acl 00:05:09.474 ************************************ 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:09.474 * Looking for test storage... 00:05:09.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:09.474 15:23:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.474 15:23:22 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:09.474 15:23:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:09.474 15:23:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:09.474 15:23:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:09.474 15:23:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:09.474 15:23:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:09.474 15:23:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.474 15:23:22 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.847 15:23:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:10.847 15:23:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:10.847 15:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.847 15:23:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:10.847 15:23:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.847 15:23:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:12.217 Hugepages 00:05:12.217 node hugesize free / total 00:05:12.217 15:23:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:12.217 15:23:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:12.217 15:23:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 00:05:12.217 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:12.217 15:23:25 setup.sh.acl 
-- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.217 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:12.218 15:23:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:12.218 15:23:25 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.218 15:23:25 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.218 15:23:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:12.218 ************************************ 00:05:12.218 START TEST denied 00:05:12.218 ************************************ 00:05:12.218 15:23:25 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:12.218 15:23:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:05:12.218 15:23:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 
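For context, the denied test starting here blocks the NVMe controller through PCI_BLOCKED and then greps the setup.sh output for the skip message. A simplified sketch of the allow/block decision being exercised, written as a hypothetical shape rather than the verbatim scripts/setup.sh logic:

    bdf=0000:0b:00.0
    if [[ " $PCI_BLOCKED " == *" $bdf "* ]]; then
        # blocked devices are left untouched; this is the line the test greps for
        echo "Skipping denied controller at $bdf"
    elif [[ -z $PCI_ALLOWED || " $PCI_ALLOWED " == *" $bdf "* ]]; then
        # otherwise (or when explicitly allowed) the device is rebound for SPDK
        echo "$bdf (8086 0a54): nvme -> vfio-pci"
    fi

The allowed test that follows flips the same switch with PCI_ALLOWED=0000:0b:00.0 and instead expects the "nvme -> vfio-pci" rebinding line.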
00:05:12.218 15:23:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:05:12.218 15:23:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.218 15:23:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.590 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.590 15:23:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.121 00:05:16.122 real 0m4.013s 00:05:16.122 user 0m1.238s 00:05:16.122 sys 0m1.948s 00:05:16.122 15:23:29 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.122 15:23:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:16.122 ************************************ 00:05:16.122 END TEST denied 00:05:16.122 ************************************ 00:05:16.122 15:23:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:16.122 15:23:29 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.122 15:23:29 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.122 15:23:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:16.122 ************************************ 00:05:16.122 START TEST allowed 00:05:16.122 ************************************ 00:05:16.122 15:23:29 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:16.122 15:23:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:05:16.122 15:23:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:16.122 15:23:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:05:16.122 15:23:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.122 15:23:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.650 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.650 15:23:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:18.650 15:23:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:18.650 15:23:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:18.650 15:23:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.650 15:23:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:20.550 00:05:20.550 real 0m4.201s 00:05:20.550 user 0m1.171s 00:05:20.550 sys 0m2.007s 00:05:20.550 15:23:33 
setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.550 15:23:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:20.550 ************************************ 00:05:20.550 END TEST allowed 00:05:20.550 ************************************ 00:05:20.550 00:05:20.550 real 0m11.178s 00:05:20.550 user 0m3.558s 00:05:20.550 sys 0m5.855s 00:05:20.550 15:23:33 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.550 15:23:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:20.550 ************************************ 00:05:20.550 END TEST acl 00:05:20.550 ************************************ 00:05:20.550 15:23:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:20.550 15:23:33 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.550 15:23:33 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.550 15:23:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:20.550 ************************************ 00:05:20.550 START TEST hugepages 00:05:20.550 ************************************ 00:05:20.550 15:23:33 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:20.550 * Looking for test storage... 00:05:20.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.550 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 34265352 kB' 'MemAvailable: 38992808 kB' 'Buffers: 2696 kB' 'Cached: 19719944 kB' 'SwapCached: 0 kB' 'Active: 15695724 kB' 'Inactive: 4481728 kB' 'Active(anon): 15081444 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 458192 kB' 'Mapped: 189512 kB' 'Shmem: 14626632 kB' 'KReclaimable: 250572 kB' 'Slab: 630324 kB' 'SReclaimable: 250572 kB' 'SUnreclaim: 379752 kB' 'KernelStack: 12976 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 16211464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198332 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 
15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:20.551 15:23:33 setup.sh.hugepages -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:20.551 15:23:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:20.551 15:23:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.551 15:23:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.551 15:23:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.551 ************************************ 00:05:20.551 START TEST default_setup 00:05:20.551 ************************************ 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.551 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.552 15:23:33 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.552 15:23:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.924 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:21.924 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:21.924 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.858 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36361024 kB' 'MemAvailable: 41088476 kB' 'Buffers: 2696 kB' 'Cached: 19720036 kB' 'SwapCached: 0 kB' 'Active: 15714960 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100680 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477300 kB' 'Mapped: 189596 kB' 'Shmem: 14626724 kB' 'KReclaimable: 250564 kB' 'Slab: 630208 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379644 kB' 'KernelStack: 12912 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.127 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36364204 kB' 'MemAvailable: 41091656 kB' 'Buffers: 2696 kB' 'Cached: 19720036 kB' 'SwapCached: 0 kB' 'Active: 15715588 kB' 'Inactive: 4481728 kB' 'Active(anon): 15101308 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477956 kB' 'Mapped: 189596 kB' 'Shmem: 14626724 kB' 'KReclaimable: 250564 kB' 'Slab: 630212 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379648 kB' 'KernelStack: 13008 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16233732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 
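
Each get_meminfo pass in this trace walks /proc/meminfo with IFS=': ' and read -r var val _, comparing every field name against the one requested (AnonHugePages above, HugePages_Surp here) before echoing its value, which is why the log shows one compare-and-continue step per meminfo field. A standalone equivalent of that lookup, offered as an illustrative sketch rather than the real setup/common.sh helper:

    # Illustrative stand-in for the traced meminfo lookup (hypothetical helper name)
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "AnonHugePages:   0 kB" -> var=AnonHugePages, val=0, "kB" falls into _
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_field HugePages_Surp    # prints 0 on the host captured in this log
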
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.128 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.129 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.130 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36364044 kB' 'MemAvailable: 41091496 kB' 'Buffers: 2696 kB' 'Cached: 19720036 kB' 'SwapCached: 0 kB' 'Active: 15714916 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100636 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476824 kB' 'Mapped: 189604 kB' 'Shmem: 14626724 kB' 'KReclaimable: 250564 kB' 'Slab: 630296 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379732 kB' 'KernelStack: 12960 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16233756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198604 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:23.130 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.131 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 
15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.132 nr_hugepages=1024 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.132 resv_hugepages=0 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.132 surplus_hugepages=0 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.132 anon_hugepages=0 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36363256 kB' 'MemAvailable: 41090708 kB' 'Buffers: 2696 kB' 'Cached: 19720080 kB' 'SwapCached: 0 kB' 'Active: 15714648 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100368 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476948 kB' 'Mapped: 189560 kB' 'Shmem: 14626768 kB' 'KReclaimable: 250564 kB' 'Slab: 630296 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379732 kB' 'KernelStack: 13184 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16234280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198620 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 
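The long printf '%s\n' 'MemTotal: ...' record above is the raw /proc/meminfo snapshot that get_meminfo captured, and the wall of [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue records around it is just bash xtrace replaying the lookup loop: set -x prints the right-hand side of [[ ... == ... ]] with every character backslash-escaped, so each meminfo key is compared against the literal string HugePages_Total and skipped until it matches. A minimal sketch of that parsing technique follows; it is illustrative only (the real helper lives in setup/common.sh) and assumes the /proc and sysfs meminfo layouts shown in this trace.

shopt -s extglob   # needed for the +([0-9]) pattern used below

# Minimal sketch of the get_meminfo() loop this trace is replaying
# (illustrative only; not the exact SPDK helper).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do  # "HugePages_Total:    1024" -> var / val
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo_sketch HugePages_Total it would print 1024 on this system; with a node argument, e.g. get_meminfo_sketch HugePages_Surp 0, it reads node 0's meminfo file instead, which is what the node-scoped lookups later in this trace do.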
-- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.132 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.133 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 
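By this point the trace has matched HugePages_Total, echoed 1024 and returned, so hugepages.sh knows the box-wide count is right; what follows is get_nodes plus a node-scoped get_meminfo call, which walk /sys/devices/system/node/node* to see how those pages are spread across the NUMA nodes. A rough sketch of that per-node accounting, reusing the parser sketched earlier (the sysfs paths follow the trace, everything else is illustrative):

# Rough sketch of the get_nodes step: count NUMA nodes and record each
# node's hugepage total from its node-scoped meminfo file.
declare -A nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}                                   # "node0" -> "0"
    nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
done
echo "no_nodes=${#nodes_sys[@]}"                           # 2 on this machine

On this run every one of the 1024 default-sized pages landed on node 0 (node0=1024, node1=0), which is why the HugePages_Surp check that follows only queries node 0 and why the test eventually prints node0=1024 expecting 1024.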
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19644152 kB' 'MemUsed: 13232788 kB' 'SwapCached: 0 kB' 'Active: 8920996 kB' 'Inactive: 1090320 kB' 'Active(anon): 8589440 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665684 kB' 'Mapped: 68000 kB' 'AnonPages: 348732 kB' 'Shmem: 8243808 kB' 'KernelStack: 8264 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138384 kB' 'Slab: 330520 kB' 'SReclaimable: 138384 kB' 'SUnreclaim: 192136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.134 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.135 node0=1024 expecting 1024 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.135 00:05:23.135 real 0m2.542s 00:05:23.135 user 0m0.656s 00:05:23.135 sys 0m0.929s 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.135 15:23:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:23.135 ************************************ 00:05:23.135 END TEST default_setup 00:05:23.135 ************************************ 00:05:23.135 15:23:36 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:23.135 15:23:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.135 15:23:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.135 15:23:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:23.394 ************************************ 00:05:23.394 START TEST per_node_1G_alloc 00:05:23.394 
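That closes the first sub-test: with the default 2048 kB page size, all 1024 requested pages were allocated (HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0) and all of them sit on node 0, so every assertion reduced to comparing 1024 against 1024. Spelled out with this run's numbers:

# What default_setup just verified, using the values from this trace:
nr_hugepages=1024; surp=0; resv=0
(( 1024 == nr_hugepages + surp + resv ))    # HugePages_Total adds up
(( 1024 == nr_hugepages ))                  # no surplus or reserved pages in play
echo 'node0=1024 expecting 1024'            # node 0 holds the whole pool

The per_node_1G_alloc sub-test starting here repeats the exercise, but asks for a fixed amount of memory on each NUMA node instead of one box-wide count.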
************************************ 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.394 15:23:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:24.771 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:24.771 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:24.771 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:24.771 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:24.771 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:24.771 0000:00:04.2 (8086 0e22): Already 
using the vfio-pci driver 00:05:24.771 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:24.771 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:24.771 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:24.771 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:24.771 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:24.771 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:24.771 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:24.771 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:24.771 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:24.771 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:24.771 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.771 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36345912 kB' 'MemAvailable: 41073364 kB' 'Buffers: 2696 kB' 'Cached: 19720156 kB' 'SwapCached: 0 kB' 'Active: 15714204 kB' 'Inactive: 4481728 kB' 'Active(anon): 15099924 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476292 kB' 'Mapped: 189524 kB' 'Shmem: 14626844 kB' 'KReclaimable: 250564 kB' 'Slab: 629888 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379324 kB' 'KernelStack: 12944 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.772 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36348508 kB' 'MemAvailable: 41075960 kB' 'Buffers: 2696 kB' 'Cached: 19720156 kB' 'SwapCached: 0 kB' 'Active: 15714940 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100660 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477052 kB' 'Mapped: 189600 kB' 'Shmem: 14626844 kB' 'KReclaimable: 250564 kB' 'Slab: 629876 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379312 kB' 'KernelStack: 12976 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.773 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 
15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.774 
15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.774 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36349116 kB' 'MemAvailable: 41076568 kB' 'Buffers: 2696 kB' 'Cached: 19720180 kB' 'SwapCached: 0 kB' 'Active: 15714176 kB' 'Inactive: 4481728 kB' 'Active(anon): 15099896 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476228 kB' 'Mapped: 189516 kB' 'Shmem: 14626868 kB' 'KReclaimable: 250564 kB' 'Slab: 629896 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379332 kB' 'KernelStack: 12992 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 
15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.775 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.776 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.777 nr_hugepages=1024 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.777 resv_hugepages=0 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.777 surplus_hugepages=0 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.777 anon_hugepages=0 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36349716 kB' 'MemAvailable: 41077168 kB' 'Buffers: 2696 kB' 'Cached: 19720200 kB' 'SwapCached: 0 kB' 'Active: 15714444 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100164 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476520 kB' 'Mapped: 189516 kB' 'Shmem: 14626888 kB' 'KReclaimable: 250564 kB' 'Slab: 629896 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379332 kB' 'KernelStack: 12976 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.777 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.778 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.779 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.039 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:25.039 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20685992 kB' 'MemUsed: 12190948 kB' 'SwapCached: 0 kB' 'Active: 8919852 kB' 'Inactive: 1090320 kB' 'Active(anon): 8588296 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665692 kB' 'Mapped: 68012 kB' 'AnonPages: 347640 kB' 'Shmem: 8243816 kB' 'KernelStack: 7848 kB' 'PageTables: 5956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138384 kB' 'Slab: 330292 kB' 'SReclaimable: 138384 kB' 'SUnreclaim: 191908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.040 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15663476 
kB' 'MemUsed: 12001312 kB' 'SwapCached: 0 kB' 'Active: 6794628 kB' 'Inactive: 3391408 kB' 'Active(anon): 6511904 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10057248 kB' 'Mapped: 121504 kB' 'AnonPages: 128880 kB' 'Shmem: 6383116 kB' 'KernelStack: 5128 kB' 'PageTables: 2548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112180 kB' 'Slab: 299604 kB' 'SReclaimable: 112180 kB' 'SUnreclaim: 187424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.041 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 
15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:25.042 node0=512 expecting 512 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:25.042 node1=512 expecting 512 00:05:25.042 15:23:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:25.042 00:05:25.042 real 0m1.695s 00:05:25.042 user 0m0.700s 00:05:25.042 sys 0m0.965s 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.042 15:23:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:25.042 ************************************ 00:05:25.042 END TEST per_node_1G_alloc 00:05:25.042 ************************************ 00:05:25.042 15:23:37 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:25.042 15:23:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.042 15:23:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.042 15:23:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.042 ************************************ 00:05:25.043 START TEST even_2G_alloc 00:05:25.043 ************************************ 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.043 
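The trace above shows get_test_nr_hugepages turning the requested 2 GiB (2097152 kB) into 1024 default-size 2048 kB pages and the per-node loop handing 512 pages to each of the two NUMA nodes before even_2G_alloc runs. A minimal standalone sketch of that arithmetic follows; it is not the project's hugepages.sh, and the function and variable names (split_hugepages_evenly, default_kb) are illustrative assumptions only:

#!/usr/bin/env bash
# Illustrative sketch: evenly split a requested hugepage allocation across
# NUMA nodes, mirroring what the traced get_test_nr_hugepages /
# get_test_nr_hugepages_per_node calls appear to compute
# (2097152 kB -> 1024 x 2048 kB pages -> 512 per node on a 2-node box).
split_hugepages_evenly() {
    local size_kb=$1             # total requested size in kB (e.g. 2097152)
    local nodes=$2               # number of NUMA nodes (e.g. 2)
    local default_kb=${3:-2048}  # default hugepage size in kB

    local total_pages=$(( size_kb / default_kb ))
    local per_node=$(( total_pages / nodes ))

    local n
    for (( n = 0; n < nodes; n++ )); do
        echo "node${n}=${per_node}"
    done
}

# Reproduces the 'node0=512 expecting 512' / 'node1=512 expecting 512'
# figures seen in the log.
split_hugepages_evenly 2097152 2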
15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.043 15:23:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.420 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:26.420 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:26.420 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:26.420 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:26.420 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:26.420 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:26.420 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:26.420 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:26.420 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:26.420 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:26.420 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:26.420 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:26.420 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:26.420 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:26.420 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:26.420 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:26.420 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.420 
15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36323760 kB' 'MemAvailable: 41051212 kB' 'Buffers: 2696 kB' 'Cached: 19720292 kB' 'SwapCached: 0 kB' 'Active: 15714812 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100532 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476724 kB' 'Mapped: 189624 kB' 'Shmem: 14626980 kB' 'KReclaimable: 250564 kB' 'Slab: 629736 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379172 kB' 'KernelStack: 12976 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198604 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.420 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36330188 kB' 'MemAvailable: 41057640 kB' 'Buffers: 2696 kB' 'Cached: 19720296 kB' 'SwapCached: 0 kB' 'Active: 15715160 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100880 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477064 kB' 'Mapped: 189624 kB' 'Shmem: 14626984 kB' 'KReclaimable: 250564 kB' 'Slab: 629712 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379148 kB' 'KernelStack: 12992 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198572 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
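Most of the repeated lines above (and the identical loops that follow) are the xtrace of the setup/common.sh get_meminfo helper: it reads a meminfo dump with IFS=': ', skips every key that is not the one requested, and echoes the value once it matches, falling back to 0. A simplified, self-contained version of that pattern is sketched below; the name meminfo_value is an assumption, and the per-node "Node <N> " prefix stripping done by the real helper is deliberately omitted:

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern visible in the trace:
# walk /proc/meminfo field by field and print the value of a single key.
# (The traced helper additionally strips the leading "Node <N> " prefix
# when reading a per-node meminfo file; that part is left out here.)
meminfo_value() {
    local key=$1
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is one 'continue' under xtrace, which is
        # why the log repeats the same test for MemTotal, MemFree, Buffers, ...
        [[ $var == "$key" ]] || continue
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0
}

# Example: surplus hugepages system-wide (0 in the dump above).
meminfo_value HugePages_Surp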
setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 
15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
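After collecting the system-wide AnonHugePages, HugePages_Surp and HugePages_Rsvd values, the traced verify_nr_hugepages logic goes on to compare each node's actual hugepage count with the expected even split (the 'nodeN=512 expecting 512' checks seen earlier for per_node_1G_alloc). A rough, hedged approximation of that per-node check, using the standard sysfs nr_hugepages counters instead of the script's meminfo parsing, might look like this; verify_even_alloc is an illustrative name, not the project's function:

#!/usr/bin/env bash
# Hedged approximation of the per-node verification seen in the trace:
# compare each node's allocated 2 MiB hugepages with an expected count.
# Reads the per-node sysfs counter; the real script derives the same
# numbers from per-node meminfo instead. Returns 0 only if every node
# that exposes the counter matches.
verify_even_alloc() {
    local expected_per_node=$1    # e.g. 512
    local node path actual rc=0
    for path in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        [[ -e $path ]] || continue   # skip if the glob did not expand
        node=${path#/sys/devices/system/node/node}
        node=${node%%/*}
        actual=$(<"$path")
        echo "node${node}=${actual} expecting ${expected_per_node}"
        [[ $actual == "$expected_per_node" ]] || rc=1
    done
    return $rc
}

verify_even_alloc 512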
setup/hugepages.sh@99 -- # surp=0 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.421 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36330444 kB' 'MemAvailable: 41057896 kB' 'Buffers: 2696 kB' 'Cached: 19720296 kB' 'SwapCached: 0 kB' 'Active: 15715176 kB' 'Inactive: 4481728 kB' 'Active(anon): 15100896 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477096 kB' 'Mapped: 189560 kB' 'Shmem: 14626984 kB' 'KReclaimable: 250564 kB' 'Slab: 629712 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379148 kB' 'KernelStack: 13040 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16232448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198572 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:26.422 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.423 nr_hugepages=1024 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.423 resv_hugepages=0 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.423 surplus_hugepages=0 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.423 anon_hugepages=0 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.423 
15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36328792 kB' 'MemAvailable: 41056244 kB' 'Buffers: 2696 kB' 'Cached: 19720340 kB' 'SwapCached: 0 kB' 'Active: 15715320 kB' 'Inactive: 4481728 kB' 'Active(anon): 15101040 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477304 kB' 'Mapped: 189560 kB' 'Shmem: 14627028 kB' 'KReclaimable: 250564 kB' 'Slab: 629800 kB' 'SReclaimable: 250564 kB' 'SUnreclaim: 379236 kB' 'KernelStack: 13088 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16235616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198572 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 
15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
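[annotation] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s... ]]" followed by "continue" in the trace above are bash xtrace output from a key-by-key scan of /proc/meminfo (or a per-node meminfo file): get_meminfo walks every line until it hits the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total) and echoes its value. A minimal sketch of that parsing pattern, reconstructed from the setup/common.sh@17-33 trace lines; this is not the verbatim helper, and argument handling is simplified:

    get_meminfo() {
        # usage (assumed for this sketch): get_meminfo HugePages_Surp [node]
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # per-node query: switch to that node's meminfo file if it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}        # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # this comparison is the repeated xtrace above
            echo "$val"                       # value only, e.g. 0 or 1024
            return 0
        done < "$mem_f"
        return 1
    }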
00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.423 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20671156 kB' 'MemUsed: 12205784 kB' 'SwapCached: 0 kB' 'Active: 8920608 kB' 'Inactive: 1090320 kB' 'Active(anon): 8589052 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665696 kB' 'Mapped: 68056 kB' 'AnonPages: 348412 kB' 'Shmem: 8243820 kB' 'KernelStack: 8056 kB' 'PageTables: 6448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138384 kB' 'Slab: 330196 kB' 'SReclaimable: 138384 kB' 'SUnreclaim: 191812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
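[annotation] At setup/hugepages.sh@112-117 above, get_nodes records 512 pages per NUMA node (nodes_sys[0]=512, nodes_sys[1]=512, no_nodes=2) and the test then reads HugePages_Surp for node 0 and node 1. A condensed sketch of that per-node check, using the get_meminfo sketch above; expected_per_node is an illustrative name, not one of the script's own variables:

    expected_per_node=512    # 1024 system-wide 2 MiB pages split over 2 NUMA nodes
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(get_meminfo HugePages_Total "$node")
        surp=$(get_meminfo HugePages_Surp "$node")
        (( total == expected_per_node && surp == 0 )) ||
            echo "node$node: unexpected hugepage layout (total=$total surp=$surp)" >&2
    done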
00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
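Annotation: the trace above is setup/common.sh's get_meminfo helper being re-entered for NUMA node 1. Below is a minimal sketch of what it is doing (not the exact setup/common.sh source): pick /proc/meminfo or the per-node meminfo file, strip the "Node <n> " prefix, then walk the "Key: value" pairs until the requested key is found. The backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace is simply how xtrace renders the quoted right-hand side of the [[ == ]] comparison.

#!/usr/bin/env bash
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs; fall back to /proc/meminfo otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every line with "Node <n> "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Surp
        echo "${val:-0}"
        return 0
    done
    echo 0                                 # key not present: report 0 (sketch behaviour)
}

get_meminfo HugePages_Surp 1               # surplus 2 MiB pages on NUMA node 1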
00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15657940 kB' 'MemUsed: 12006848 kB' 'SwapCached: 0 kB' 'Active: 6795284 kB' 'Inactive: 3391408 kB' 'Active(anon): 6512560 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10057340 kB' 'Mapped: 121552 kB' 'AnonPages: 129436 kB' 'Shmem: 6383208 kB' 'KernelStack: 5160 kB' 'PageTables: 2380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112180 kB' 'Slab: 299588 kB' 'SReclaimable: 112180 kB' 'SUnreclaim: 187408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.424 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 
15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.425 node0=512 expecting 512 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:26.425 node1=512 expecting 512 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:26.425 00:05:26.425 real 0m1.530s 00:05:26.425 user 0m0.639s 00:05:26.425 sys 0m0.859s 00:05:26.425 15:23:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.425 15:23:39 
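Annotation: the even_2G_alloc run finishes here with the "node0=512 expecting 512" / "node1=512 expecting 512" checks. A simplified re-check of that invariant, assuming per-node meminfo is the source of truth (the real verify_nr_hugepages also folds in the surplus/reserved counts gathered above, as the trace shows):

# Confirm every NUMA node ended up with the same number of 2 MiB hugepages.
expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "${node_dir##*/}=$total expecting $expected"
    (( total == expected )) || { echo "uneven allocation" >&2; exit 1; }
done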
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:26.425 ************************************ 00:05:26.425 END TEST even_2G_alloc 00:05:26.425 ************************************ 00:05:26.722 15:23:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:26.722 15:23:39 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.722 15:23:39 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.722 15:23:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.722 ************************************ 00:05:26.722 START TEST odd_alloc 00:05:26.722 ************************************ 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:26.722 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.723 15:23:39 setup.sh.hugepages.odd_alloc -- 
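Annotation: the odd_alloc test starting here requests HUGEMEM=2049 MiB, which the trace maps to 1025 two-MiB pages (2099200 kB of Hugetlb in the later meminfo snapshots), and get_test_nr_hugepages_per_node splits that odd count over the two NUMA nodes. A sketch of the arithmetic the @81-@84 lines walk through; the values match the trace, though the helper itself is not reproduced verbatim:

# Split an odd hugepage count across NUMA nodes as the trace shows:
# 1025 pages over 2 nodes -> node1 gets 512, node0 picks up the remainder (513).
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
    _no_nodes=$(( _no_nodes - 1 ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512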
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:28.109 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:28.109 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:28.109 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:28.109 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:28.109 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:28.109 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:28.109 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:28.109 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:28.109 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:28.109 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:28.109 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:28.109 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:28.109 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:28.109 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:28.109 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:28.109 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:28.109 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36327340 kB' 'MemAvailable: 41054748 kB' 'Buffers: 2696 kB' 'Cached: 19720432 kB' 'SwapCached: 0 kB' 'Active: 15708128 
kB' 'Inactive: 4481728 kB' 'Active(anon): 15093848 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470076 kB' 'Mapped: 188588 kB' 'Shmem: 14627120 kB' 'KReclaimable: 250476 kB' 'Slab: 629376 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378900 kB' 'KernelStack: 12864 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 16206644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.109 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.110 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36330948 kB' 'MemAvailable: 41058356 kB' 'Buffers: 2696 kB' 'Cached: 19720436 kB' 'SwapCached: 0 kB' 'Active: 15708116 kB' 'Inactive: 4481728 kB' 'Active(anon): 15093836 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470116 kB' 'Mapped: 188600 kB' 'Shmem: 14627124 kB' 'KReclaimable: 250476 kB' 'Slab: 629380 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378904 kB' 'KernelStack: 12912 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 16206660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198380 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.111 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.112 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36331316 kB' 'MemAvailable: 41058724 kB' 'Buffers: 2696 kB' 'Cached: 19720436 kB' 'SwapCached: 0 kB' 'Active: 15707780 kB' 'Inactive: 4481728 kB' 'Active(anon): 15093500 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469780 kB' 'Mapped: 188600 kB' 'Shmem: 14627124 kB' 'KReclaimable: 250476 kB' 'Slab: 629380 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378904 kB' 'KernelStack: 12880 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 16206680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198380 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.113 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.114 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 
15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:28.115 nr_hugepages=1025 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.115 resv_hugepages=0 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.115 surplus_hugepages=0 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.115 anon_hugepages=0 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- 
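
The long run of skipped keys above is the xtrace of the test's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo file under sysfs when a node index is given), splits every row on ': ', and walks past each key until it reaches the one requested, first HugePages_Surp and then HugePages_Rsvd, both of which come back as 0 here. A minimal sketch of that lookup pattern follows; it mirrors what the trace shows but is not the project's setup/common.sh, and the function name is invented for illustration.

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the meminfo lookup traced above (illustrative, not setup/common.sh).
# Prints the value of one key from /proc/meminfo, or from the per-node file
# under sysfs when a node index is supplied; prints 0 if the key is absent.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node rows carry a "Node <n> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

get_meminfo_value HugePages_Total        # 1025 on the system in this log
get_meminfo_value HugePages_Surp 0       # surplus pages on NUMA node 0

The helper in the log does the same walk with mapfile and a read loop over an array, which is why every non-matching key shows up as its own continue record in the trace.
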
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36331064 kB' 'MemAvailable: 41058472 kB' 'Buffers: 2696 kB' 'Cached: 19720492 kB' 'SwapCached: 0 kB' 'Active: 15707464 kB' 'Inactive: 4481728 kB' 'Active(anon): 15093184 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469348 kB' 'Mapped: 188520 kB' 'Shmem: 14627180 kB' 'KReclaimable: 250476 kB' 'Slab: 629360 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378884 kB' 'KernelStack: 12880 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 16206700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.115 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.116 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20694884 kB' 'MemUsed: 12182056 kB' 'SwapCached: 0 kB' 'Active: 8914984 kB' 'Inactive: 1090320 kB' 
'Active(anon): 8583428 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665708 kB' 'Mapped: 67116 kB' 'AnonPages: 342804 kB' 'Shmem: 8243832 kB' 'KernelStack: 7736 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138312 kB' 'Slab: 329840 kB' 'SReclaimable: 138312 kB' 'SUnreclaim: 191528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.117 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15636764 kB' 'MemUsed: 12028024 kB' 'SwapCached: 0 kB' 'Active: 6793444 kB' 'Inactive: 3391408 kB' 'Active(anon): 6510720 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10057504 kB' 'Mapped: 121404 kB' 'AnonPages: 127516 kB' 'Shmem: 6383372 kB' 'KernelStack: 5176 kB' 'PageTables: 2496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112164 kB' 'Slab: 299520 kB' 'SReclaimable: 112164 kB' 'SUnreclaim: 187356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
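Note: the trace above is setup/common.sh's get_meminfo walking a node's meminfo file key by key under IFS=': ' until the requested field (here HugePages_Surp) matches, at which point its value is echoed. A minimal standalone rendering of that pattern, assuming only what the trace itself shows (the per-node sysfs path at common.sh@23-24, the "Node <N> " prefix strip at common.sh@29, and the read loop at common.sh@31-33); the function name get_meminfo_sketch and its exact body are illustrative, not the script verbatim:

  shopt -s extglob
  get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo mem var val _ line
    # prefer the node-local meminfo when a node index is given (common.sh@23-24)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node <N> " prefix (stripped at common.sh@29)
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  # usage matching the trace: get_meminfo_sketch HugePages_Surp 1   -> prints node1's surplus count
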
00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.118 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.119 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:28.120 node0=512 expecting 513 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:28.120 node1=513 expecting 512 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:28.120 00:05:28.120 real 0m1.610s 00:05:28.120 user 0m0.679s 00:05:28.120 sys 0m0.896s 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.120 15:23:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.120 ************************************ 00:05:28.120 END TEST odd_alloc 00:05:28.120 ************************************ 00:05:28.120 15:23:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:28.120 15:23:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.120 15:23:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.120 15:23:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.378 ************************************ 00:05:28.378 START TEST custom_alloc 00:05:28.378 ************************************ 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:28.378 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.379 15:23:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.783 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:29.783 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:29.783 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:29.783 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:29.783 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:29.783 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:05:29.783 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:29.783 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:29.783 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:29.783 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:29.783 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:29.783 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:29.783 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:29.783 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:29.783 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:29.783 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:29.783 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:29.783 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35273760 kB' 'MemAvailable: 40001168 kB' 'Buffers: 2696 kB' 'Cached: 19720560 kB' 'SwapCached: 0 kB' 'Active: 15707696 kB' 'Inactive: 4481728 kB' 'Active(anon): 15093416 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469520 kB' 'Mapped: 188564 
kB' 'Shmem: 14627248 kB' 'KReclaimable: 250476 kB' 'Slab: 629216 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378740 kB' 'KernelStack: 12896 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 16206532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 
15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.784 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
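Note: the setup/hugepages.sh@96 check a little further up ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) gates the anonymous-hugepage baseline on transparent hugepages not being set to "never", after which AnonHugePages is sampled system-wide (no node argument, so /proc/meminfo). A small sketch of that guard; the sysfs path for the THP mode string is the standard kernel location and is an assumption about where the value seen in the trace comes from, and get_meminfo_sketch is the illustrative helper from the earlier note:

  # current THP mode string, e.g. "always [madvise] never"
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: record the system-wide anonymous hugepage usage as a baseline
    anon=$(get_meminfo_sketch AnonHugePages)
  else
    anon=0
  fi
  echo "AnonHugePages baseline: ${anon:-0} kB"
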
00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35274496 kB' 'MemAvailable: 40001904 kB' 'Buffers: 2696 kB' 'Cached: 19720564 kB' 'SwapCached: 0 kB' 'Active: 15708568 kB' 'Inactive: 4481728 kB' 'Active(anon): 15094288 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470388 kB' 'Mapped: 188640 kB' 'Shmem: 14627252 kB' 'KReclaimable: 250476 kB' 'Slab: 629276 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378800 kB' 'KernelStack: 12912 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 16206552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.785 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.786 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 
15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35274604 kB' 'MemAvailable: 40002012 kB' 'Buffers: 2696 kB' 'Cached: 19720580 kB' 'SwapCached: 0 kB' 'Active: 15707784 kB' 'Inactive: 4481728 kB' 'Active(anon): 15093504 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469476 kB' 'Mapped: 188500 kB' 'Shmem: 14627268 kB' 'KReclaimable: 250476 kB' 'Slab: 629284 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378808 kB' 'KernelStack: 12864 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 16206572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.787 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 
15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.788 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:29.789 nr_hugepages=1536 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:29.789 resv_hugepages=0 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:29.789 surplus_hugepages=0 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:29.789 anon_hugepages=0 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35274508 kB' 'MemAvailable: 40001916 kB' 'Buffers: 2696 kB' 'Cached: 19720608 kB' 'SwapCached: 0 kB' 'Active: 15708248 kB' 'Inactive: 4481728 kB' 'Active(anon): 15093968 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469900 kB' 'Mapped: 188500 kB' 'Shmem: 
14627296 kB' 'KReclaimable: 250476 kB' 'Slab: 629284 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378808 kB' 'KernelStack: 12912 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 16206964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:29.789 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.790 15:23:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.790 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[the same IFS=': ' / read -r var val _ / [[ field == HugePages_Total ]] / continue cycle repeats for the remaining meminfo fields, Zswap through Unaccepted]
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
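get_nodes has just recorded 512 hugepages on node0 and 1024 on node1. The entries that follow add any reserved and surplus pages to each node's expected count and then print an expected-versus-actual line per node. A self-contained sketch of that bookkeeping, with the values taken from this run; the sysfs surplus path and the 2048 kB page size are assumptions, and this is a paraphrase rather than the SPDK helper itself:

#!/usr/bin/env bash
# Hypothetical sketch of the per-node verification loop traced below.
# nodes_test holds the counts the test asked for, nodes_sys what get_nodes
# just read back (512/1024 in this run). Sysfs path and page size are assumed.
nodes_test=([0]=512 [1]=1024)
nodes_sys=([0]=512 [1]=1024)
resv=0   # reserved hugepages; 0 in this run

for node in "${!nodes_test[@]}"; do
	surp=$(< "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/surplus_hugepages")
	(( nodes_test[node] += resv + surp ))   # expected = requested + reserved + surplus
done

for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
	(( nodes_sys[node] == nodes_test[node] )) || exit 1
done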
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:29.791 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20672312 kB' 'MemUsed: 12204628 kB' 'SwapCached: 0 kB' 'Active: 8915308 kB' 'Inactive: 1090320 kB' 'Active(anon): 8583752 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665752 kB' 'Mapped: 67096 kB' 'AnonPages: 342984 kB' 'Shmem: 8243876 kB' 'KernelStack: 7736 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138312 kB' 'Slab: 329804 kB' 'SReclaimable: 138312 kB' 'SUnreclaim: 191492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[get_meminfo runs its usual preamble (locals, mapfile, Node-prefix strip) and scans node0's fields one by one, skipping each until HugePages_Surp matches]
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:29.792 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 14602196 kB' 'MemUsed: 13062592 kB' 'SwapCached: 0 kB' 'Active: 6792980 kB' 'Inactive: 3391408 kB' 'Active(anon): 6510256 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10057572 kB' 'Mapped: 121404 kB' 'AnonPages: 126908 kB' 'Shmem: 6383440 kB' 'KernelStack: 5176 kB' 'PageTables: 2416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112164 kB' 'Slab: 299480 kB' 'SReclaimable: 112164 kB' 'SUnreclaim: 187316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[node1's meminfo is scanned the same way, field by field, until HugePages_Surp matches]
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:29.794 node0=512 expecting 512
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:29.794 node1=1024 expecting 1024
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:29.794
00:05:29.794 real 0m1.530s
00:05:29.794 user 0m0.610s
00:05:29.794 sys 0m0.885s
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:29.794 15:23:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:29.794 ************************************
00:05:29.794 END TEST custom_alloc
00:05:29.794 ************************************
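The next test, no_shrink_alloc, starts below by turning a 2097152 kB request into a hugepage count and pinning the whole request to node 0 (nr_hugepages=1024, nodes_test[0]=1024 in the entries that follow). A rough sketch of that conversion, assuming the size argument and the 2048 kB default page size are both in kB as the trace suggests; the division and the per-node assignment rule are inferred from the expanded values, not copied from the SPDK script:

#!/usr/bin/env bash
# Rough sketch of the get_test_nr_hugepages / get_test_nr_hugepages_per_node
# step traced below: convert a size in kB into 2 MiB hugepages and assign the
# whole request to each listed NUMA node. Units and the 2048 kB default are
# assumptions.
default_hugepages=2048   # kB per hugepage

get_test_nr_hugepages() {
	local size=$1; shift
	local node_ids=("$@")          # remaining args are the target nodes
	(( size >= default_hugepages )) || return 1
	nr_hugepages=$(( size / default_hugepages ))

	declare -ga nodes_test=()
	local node
	for node in "${node_ids[@]}"; do
		nodes_test[$node]=$nr_hugepages
	done
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages on node(s): ${!nodes_test[*]}"   # 1024 pages, node 0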
00:05:29.794 15:23:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:29.794 15:23:42 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:29.794 15:23:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:29.794 15:23:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:29.794 ************************************
00:05:29.794 START TEST no_shrink_alloc
00:05:29.794 ************************************
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.794 15:23:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:31.168 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:31.168 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:31.168 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:31.168 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:31.168 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:31.168 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:31.168 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:31.168 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:31.168 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:31.168 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:31.168 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:31.168 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:31.168 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:31.168 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:31.168 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:31.168 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:31.168 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36335096 kB' 'MemAvailable: 41062504 kB' 'Buffers: 2696 kB' 'Cached: 19720692 kB' 'SwapCached: 0 kB' 'Active: 15709120 kB' 'Inactive: 4481728 kB' 'Active(anon): 15094840 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470724 kB' 'Mapped: 188608 kB' 'Shmem: 14627380 kB' 'KReclaimable: 250476 kB' 'Slab: 629356 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378880 kB' 'KernelStack: 12944 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16207160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.168 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.168 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 
15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.169 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36338120 kB' 'MemAvailable: 41065528 kB' 'Buffers: 2696 kB' 'Cached: 19720696 kB' 'SwapCached: 0 kB' 'Active: 15711604 kB' 'Inactive: 4481728 kB' 'Active(anon): 15097324 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 
0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473220 kB' 'Mapped: 189120 kB' 'Shmem: 14627384 kB' 'KReclaimable: 250476 kB' 'Slab: 629396 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378920 kB' 'KernelStack: 12928 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16209988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.432 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.433 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36338020 kB' 'MemAvailable: 41065428 kB' 'Buffers: 2696 kB' 'Cached: 19720696 kB' 'SwapCached: 0 kB' 'Active: 15713456 kB' 'Inactive: 4481728 kB' 'Active(anon): 15099176 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475064 kB' 'Mapped: 189056 kB' 'Shmem: 14627384 kB' 'KReclaimable: 250476 kB' 'Slab: 629396 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378920 kB' 
'KernelStack: 12864 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16213320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198544 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.434 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.435 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
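[Editor's note] The trace above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" pair at a time and skipping (continue) every key that is not the one requested, which is why the same IFS / read / continue triplet repeats once per meminfo field for each lookup (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total). A minimal sketch of that pattern, reconstructed from the xtrace itself (names as they appear in the trace; treat it as illustrative, not the verbatim setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the get_meminfo pattern traced in this log: print one field
    # from /proc/meminfo, or from a node's own meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _ line
        # Per-node lookups read that node's meminfo when the sysfs file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node meminfo lines carry a "Node N " prefix; strip it so keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # not the requested key, keep scanning
            echo "$val"                        # value only, unit (kB) is dropped by read
            return 0
        done
    }

In this run the helper reports AnonHugePages, HugePages_Surp and HugePages_Rsvd as 0, so the hugepages.sh bookkeeping that follows (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) satisfies the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) checks before the next HugePages_Total lookup.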
00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:31.436 nr_hugepages=1024 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:31.436 resv_hugepages=0 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:31.436 surplus_hugepages=0 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:31.436 anon_hugepages=0 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36338280 kB' 'MemAvailable: 41065688 kB' 'Buffers: 2696 kB' 'Cached: 19720732 kB' 'SwapCached: 0 kB' 'Active: 15708472 kB' 'Inactive: 4481728 kB' 'Active(anon): 15094192 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469976 kB' 'Mapped: 188940 kB' 'Shmem: 14627420 kB' 'KReclaimable: 
250476 kB' 'Slab: 629372 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378896 kB' 'KernelStack: 12960 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16207652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.436 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.437 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.438 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.439 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19654340 kB' 'MemUsed: 13222600 kB' 'SwapCached: 0 kB' 'Active: 8915404 kB' 'Inactive: 1090320 kB' 'Active(anon): 8583848 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665868 kB' 'Mapped: 67136 kB' 'AnonPages: 343016 kB' 'Shmem: 8243992 kB' 'KernelStack: 7736 kB' 'PageTables: 5516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138312 kB' 'Slab: 329880 kB' 'SReclaimable: 138312 kB' 'SUnreclaim: 191568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.439 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:31.440 node0=1024 expecting 1024
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:31.440 15:23:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:32.821 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:32.821 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:32.821 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:32.821 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:32.821 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:32.821 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:32.821 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:32.821 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:32.821 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:32.821 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:32.821 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:32.821 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:32.821 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:32.821 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:32.821 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:32.821 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:32.821 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:32.821 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36318260 kB' 'MemAvailable: 41045668 kB' 'Buffers: 2696 kB' 'Cached: 19720800 kB' 'SwapCached: 0 kB' 'Active: 15709564 kB' 'Inactive: 4481728 kB' 'Active(anon): 15095284 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471020 kB' 'Mapped: 188516 kB' 'Shmem: 14627488 kB' 'KReclaimable: 250476 kB' 'Slab: 629432 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378956 kB' 'KernelStack: 12912 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16207472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198492 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB'
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.821 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.822 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36318008 kB' 'MemAvailable: 41045416 kB' 'Buffers: 2696 kB' 'Cached: 19720800 kB' 'SwapCached: 0 kB' 'Active: 15710156 kB' 'Inactive: 4481728 kB' 'Active(anon): 15095876 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471668 kB' 'Mapped: 188592 kB' 'Shmem: 14627488 kB' 'KReclaimable: 250476 kB' 'Slab: 629464 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378988 kB' 'KernelStack: 12944 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16207488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- 
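[editor note] The trace above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-NUMA-node meminfo file when a node id is supplied), strips any "Node N" prefix, then walks each "key: value" pair until it reaches the requested field (here HugePages_Surp) and echoes its value. A minimal sketch of that parsing loop follows, assuming the same /proc/meminfo layout; the function name and argument handling are illustrative, not the verbatim SPDK source.

# Sketch of the behaviour traced above (simplified, not the real setup/common.sh).
# Usage: get_meminfo_sketch <field> [numa_node]
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Prefer the per-node view when a node id is supplied and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <id> "; strip it, then split
    # on ':' and spaces so $var is the key and $val the numeric value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# Example against the snapshot printed above:
#   get_meminfo_sketch HugePages_Surp   -> 0   (matches surp=0 in the trace)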
setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.823 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:32.824 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36318564 kB' 'MemAvailable: 41045972 kB' 'Buffers: 2696 kB' 'Cached: 19720804 kB' 'SwapCached: 0 kB' 'Active: 15710060 kB' 'Inactive: 4481728 kB' 'Active(anon): 15095780 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471548 kB' 'Mapped: 188592 kB' 'Shmem: 14627492 kB' 'KReclaimable: 250476 kB' 'Slab: 629440 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378964 kB' 'KernelStack: 12928 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16207144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 
15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.825 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.826 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:32.827 nr_hugepages=1024 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.827 resv_hugepages=0 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.827 surplus_hugepages=0 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.827 anon_hugepages=0 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:32.827 15:23:45 
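[editor note] At this point the no_shrink_alloc test has collected anon=0, surp=0 and resv=0, echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and asserts that the configured pool still accounts for exactly the requested pages before re-reading HugePages_Total. A hedged sketch of that accounting check, reusing the get_meminfo_sketch helper above (names are illustrative, not the real setup/hugepages.sh):

# Sketch of the hugepage accounting check around hugepages.sh@97-110 in the trace.
verify_no_shrink_alloc() {
    local nr_hugepages=1024                      # pages requested by the test
    local anon surp resv total
    anon=$(get_meminfo_sketch AnonHugePages)     # THP usage in kB, expected 0
    surp=$(get_meminfo_sketch HugePages_Surp)    # surplus pages, expected 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # reserved pages, expected 0
    total=$(get_meminfo_sketch HugePages_Total)  # pool size, expected 1024
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv" \
         "surplus_hugepages=$surp anon_hugepages=$anon"
    # The pool must equal the requested size with no surplus or reserved pages
    # left over, i.e. the allocation did not shrink.
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))
}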
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36318364 kB' 'MemAvailable: 41045772 kB' 'Buffers: 2696 kB' 'Cached: 19720844 kB' 'SwapCached: 0 kB' 'Active: 15709088 kB' 'Inactive: 4481728 kB' 'Active(anon): 15094808 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470484 kB' 'Mapped: 188516 kB' 'Shmem: 14627532 kB' 'KReclaimable: 250476 kB' 'Slab: 629380 kB' 'SReclaimable: 250476 kB' 'SUnreclaim: 378904 kB' 'KernelStack: 12880 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 16207296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198412 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 20144128 kB' 'DirectMap1G: 47185920 kB' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.827 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.828 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.828 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.829 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19646684 kB' 'MemUsed: 13230256 kB' 'SwapCached: 0 kB' 'Active: 8915772 kB' 'Inactive: 1090320 kB' 'Active(anon): 8584216 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9665980 kB' 'Mapped: 67112 kB' 'AnonPages: 343264 kB' 'Shmem: 8244104 kB' 'KernelStack: 7720 kB' 'PageTables: 5376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138312 kB' 'Slab: 329796 kB' 'SReclaimable: 138312 kB' 'SUnreclaim: 191484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.829 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.830 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.088 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:33.089 node0=1024 expecting 1024 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:33.089 00:05:33.089 real 0m3.134s 00:05:33.089 user 0m1.297s 00:05:33.089 sys 0m1.774s 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.089 15:23:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 ************************************ 00:05:33.089 END TEST no_shrink_alloc 00:05:33.089 ************************************ 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:33.089 15:23:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:33.089 00:05:33.089 real 0m12.459s 00:05:33.089 user 0m4.765s 00:05:33.089 sys 0m6.554s 00:05:33.089 15:23:45 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.089 15:23:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 ************************************ 00:05:33.089 END TEST hugepages 00:05:33.089 ************************************ 00:05:33.089 15:23:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:33.089 15:23:45 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.089 15:23:45 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.089 15:23:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 ************************************ 00:05:33.089 START TEST driver 00:05:33.089 ************************************ 00:05:33.089 15:23:46 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:33.089 * Looking for test storage... 
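The wall of no_shrink_alloc xtrace above is a field-by-field scan of /proc/meminfo (and, for node 0, /sys/devices/system/node/node0/meminfo): every key that is not the one being asked for hits `continue`, and the value of the matching key is echoed back. A minimal sketch of that kind of lookup, using a hypothetical helper name rather than the actual setup/common.sh code:

#!/usr/bin/env bash
# Sketch only: look up a single field (e.g. HugePages_Total) the way the
# trace above does. get_meminfo_value is a hypothetical name, not the real
# helper in setup/common.sh.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val rest

    # Per-node meminfo lives under sysfs and prefixes every line with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"                     # HugePages_* fields carry no unit
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # strip the per-node prefix
    return 1
}

get_meminfo_value HugePages_Total        # the test above expects 1024 here
get_meminfo_value HugePages_Surp 0       # per-node surplus pages, 0 in the trace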
00:05:33.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:33.089 15:23:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:33.089 15:23:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.089 15:23:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:35.616 15:23:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:35.616 15:23:48 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.616 15:23:48 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.616 15:23:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:35.616 ************************************ 00:05:35.616 START TEST guess_driver 00:05:35.616 ************************************ 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:35.616 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:35.616 Looking for driver=vfio-pci 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.616 15:23:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:36.990 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:36.991 15:23:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.926 15:23:50 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:40.455 00:05:40.455 real 0m5.029s 00:05:40.455 user 0m1.210s 00:05:40.455 sys 0m1.968s 00:05:40.455 15:23:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.456 15:23:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 ************************************ 00:05:40.456 END TEST guess_driver 00:05:40.456 ************************************ 00:05:40.456 00:05:40.456 real 0m7.470s 00:05:40.456 user 0m1.765s 00:05:40.456 sys 0m3.021s 00:05:40.456 15:23:53 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.456 
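The guess_driver run above settles on vfio-pci because the host exposes 189 IOMMU groups and `modprobe --show-depends vfio_pci` resolves to real .ko modules. A rough sketch of that decision, under the assumption that the fallback driver is uio_pci_generic as in the upstream setup scripts:

#!/usr/bin/env bash
# Sketch only: mirror the pick_driver decision traced above. Prefer vfio-pci
# when the IOMMU is usable (or unsafe no-IOMMU mode is on) and the module
# resolves; assume uio_pci_generic as the fallback.
shopt -s nullglob

pick_driver() {
    local unsafe_vfio=N
    local groups=(/sys/kernel/iommu_groups/*)

    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    if (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
        # --show-depends lists the insmod commands modprobe would run; a ".ko"
        # in that output means the driver and its dependencies are installed.
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo uio_pci_generic
}

pick_driver    # prints "vfio-pci" on the node above (189 IOMMU groups)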
15:23:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 ************************************ 00:05:40.456 END TEST driver 00:05:40.456 ************************************ 00:05:40.456 15:23:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:40.456 15:23:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.456 15:23:53 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.456 15:23:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 ************************************ 00:05:40.456 START TEST devices 00:05:40.456 ************************************ 00:05:40.456 15:23:53 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:40.713 * Looking for test storage... 00:05:40.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:40.713 15:23:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:40.713 15:23:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:40.713 15:23:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.713 15:23:53 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:42.611 15:23:55 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:42.611 No valid GPT data, 
bailing 00:05:42.611 15:23:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:42.611 15:23:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:42.611 15:23:55 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:42.611 15:23:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.611 15:23:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:42.611 ************************************ 00:05:42.611 START TEST nvme_mount 00:05:42.611 ************************************ 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:42.611 15:23:55 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:42.611 15:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:43.580 Creating new GPT entries in memory. 00:05:43.580 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:43.580 other utilities. 00:05:43.580 15:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:43.580 15:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:43.580 15:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:43.580 15:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:43.580 15:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:44.513 Creating new GPT entries in memory. 00:05:44.513 The operation has completed successfully. 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1164096 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.513 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.514 15:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:45.886 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.886 15:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:46.143 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:46.143 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:46.143 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:46.143 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:46.143 15:23:59 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.143 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:46.144 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:46.144 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.144 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:46.144 15:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:46.144 15:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.144 15:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.512 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.513 15:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:48.885 15:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.143 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.143 00:05:49.143 real 0m6.657s 00:05:49.143 user 0m1.594s 00:05:49.143 sys 0m2.650s 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.143 15:24:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:49.143 ************************************ 00:05:49.143 END TEST nvme_mount 00:05:49.143 ************************************ 00:05:49.143 
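For reference, the nvme_mount run traced above reduces to a short shell sequence: zap the disk, create a ~1 GiB GPT partition, put ext4 on it, mount it, drop a marker file, then wipe everything again. The sketch below is illustrative only; the real logic lives in test/setup/devices.sh and setup/common.sh, and the mount point shown here is an assumed stand-in for the test's nvme_mount directory.

    disk=/dev/nvme0n1                                 # test disk selected by devices.sh in this run
    mnt=/tmp/nvme_mount                               # hypothetical mount point (the test uses test/setup/nvme_mount)
    sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199               # one ~1 GiB partition, same bounds as in the log
    mkfs.ext4 -qF "${disk}p1"                         # quiet, forced ext4 format
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"       # mount the fresh filesystem
    touch "$mnt/test_nvme"                            # marker file the verify step checks for
    umount "$mnt"                                     # teardown mirrors cleanup_nvme
    wipefs --all "${disk}p1"; wipefs --all "$disk"    # erase the ext4 and GPT signatures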
15:24:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:49.143 15:24:02 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.143 15:24:02 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.143 15:24:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:49.143 ************************************ 00:05:49.143 START TEST dm_mount 00:05:49.143 ************************************ 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:49.143 15:24:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:50.076 Creating new GPT entries in memory. 00:05:50.076 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:50.076 other utilities. 00:05:50.076 15:24:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:50.076 15:24:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.076 15:24:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:50.076 15:24:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.076 15:24:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:51.447 Creating new GPT entries in memory. 00:05:51.448 The operation has completed successfully. 
00:05:51.448 15:24:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:51.448 15:24:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.448 15:24:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:51.448 15:24:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:51.448 15:24:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:52.381 The operation has completed successfully. 00:05:52.381 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:52.381 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.381 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1166894 00:05:52.381 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:52.381 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.382 15:24:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:53.756 
15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.756 15:24:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:55.129 15:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:55.129 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:55.129 00:05:55.129 real 0m6.100s 00:05:55.129 user 0m1.128s 00:05:55.129 sys 0m1.859s 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.129 15:24:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:55.129 ************************************ 00:05:55.129 END TEST dm_mount 00:05:55.129 ************************************ 00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.129 15:24:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:55.387 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:55.387 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:55.387 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:55.387 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.387 15:24:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:55.387 00:05:55.387 real 0m14.963s 00:05:55.387 user 0m3.483s 00:05:55.387 sys 0m5.722s 00:05:55.387 15:24:08 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.387 15:24:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:55.387 ************************************ 00:05:55.387 END TEST devices 00:05:55.387 ************************************ 00:05:55.645 00:05:55.645 real 0m46.326s 00:05:55.645 user 0m13.681s 00:05:55.645 sys 0m21.305s 00:05:55.645 15:24:08 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.645 15:24:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:55.645 ************************************ 00:05:55.645 END TEST setup.sh 00:05:55.645 ************************************ 00:05:55.645 15:24:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:57.017 Hugepages 00:05:57.017 node hugesize free / total 00:05:57.017 node0 1048576kB 0 / 0 00:05:57.017 node0 2048kB 2048 / 2048 00:05:57.017 node1 1048576kB 0 / 0 00:05:57.017 node1 2048kB 0 / 0 00:05:57.017 00:05:57.017 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:57.017 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:57.017 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:57.017 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:57.017 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:57.017 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:57.017 15:24:09 -- spdk/autotest.sh@130 -- # uname -s 
00:05:57.017 15:24:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:57.017 15:24:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:57.017 15:24:09 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:58.399 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:58.399 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:58.399 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:59.331 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:59.331 15:24:12 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:00.702 15:24:13 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:00.702 15:24:13 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:00.702 15:24:13 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:00.702 15:24:13 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:00.702 15:24:13 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:00.702 15:24:13 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:00.702 15:24:13 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:00.702 15:24:13 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:00.702 15:24:13 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:00.702 15:24:13 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:00.702 15:24:13 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:06:00.702 15:24:13 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:01.638 Waiting for block devices as requested 00:06:01.639 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:01.929 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:01.929 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:01.929 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:01.929 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:01.929 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:02.226 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:02.226 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:02.226 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:06:02.226 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:02.484 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:02.484 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:02.484 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:02.484 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:02.742 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:02.742 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:02.742 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:03.001 15:24:15 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:06:03.001 15:24:15 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1498 -- # grep 0000:0b:00.0/nvme/nvme 00:06:03.001 15:24:15 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:06:03.001 15:24:15 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:03.001 15:24:15 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:03.001 15:24:15 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:03.001 15:24:15 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:06:03.001 15:24:15 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:03.001 15:24:15 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:03.001 15:24:15 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:03.001 15:24:15 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:03.001 15:24:15 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:03.001 15:24:15 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:03.001 15:24:15 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:03.001 15:24:15 -- common/autotest_common.sh@1553 -- # continue 00:06:03.001 15:24:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:03.001 15:24:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.001 15:24:15 -- common/autotest_common.sh@10 -- # set +x 00:06:03.001 15:24:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:03.001 15:24:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.001 15:24:15 -- common/autotest_common.sh@10 -- # set +x 00:06:03.001 15:24:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:04.377 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:04.377 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:04.377 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:05.314 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:05.314 15:24:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:05.314 15:24:18 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:06:05.314 15:24:18 -- common/autotest_common.sh@10 -- # set +x 00:06:05.314 15:24:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:05.314 15:24:18 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:05.314 15:24:18 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:05.314 15:24:18 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:05.314 15:24:18 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:05.314 15:24:18 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:05.314 15:24:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:05.314 15:24:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:05.314 15:24:18 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.314 15:24:18 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:05.314 15:24:18 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:05.572 15:24:18 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:05.572 15:24:18 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:06:05.572 15:24:18 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:05.572 15:24:18 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:06:05.572 15:24:18 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:06:05.572 15:24:18 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:05.572 15:24:18 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:06:05.572 15:24:18 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:0b:00.0 00:06:05.572 15:24:18 -- common/autotest_common.sh@1588 -- # [[ -z 0000:0b:00.0 ]] 00:06:05.572 15:24:18 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1173284 00:06:05.572 15:24:18 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.572 15:24:18 -- common/autotest_common.sh@1594 -- # waitforlisten 1173284 00:06:05.572 15:24:18 -- common/autotest_common.sh@827 -- # '[' -z 1173284 ']' 00:06:05.572 15:24:18 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.572 15:24:18 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.573 15:24:18 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.573 15:24:18 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.573 15:24:18 -- common/autotest_common.sh@10 -- # set +x 00:06:05.573 [2024-05-15 15:24:18.508907] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:05.573 [2024-05-15 15:24:18.508982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173284 ] 00:06:05.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.573 [2024-05-15 15:24:18.544943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:05.573 [2024-05-15 15:24:18.575303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.573 [2024-05-15 15:24:18.659451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.830 15:24:18 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.830 15:24:18 -- common/autotest_common.sh@860 -- # return 0 00:06:05.830 15:24:18 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:06:05.830 15:24:18 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:06:05.830 15:24:18 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:06:09.112 nvme0n1 00:06:09.112 15:24:22 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:09.369 [2024-05-15 15:24:22.241646] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:09.369 [2024-05-15 15:24:22.241698] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:09.369 request: 00:06:09.369 { 00:06:09.369 "nvme_ctrlr_name": "nvme0", 00:06:09.369 "password": "test", 00:06:09.369 "method": "bdev_nvme_opal_revert", 00:06:09.369 "req_id": 1 00:06:09.369 } 00:06:09.370 Got JSON-RPC error response 00:06:09.370 response: 00:06:09.370 { 00:06:09.370 "code": -32603, 00:06:09.370 "message": "Internal error" 00:06:09.370 } 00:06:09.370 15:24:22 -- common/autotest_common.sh@1600 -- # true 00:06:09.370 15:24:22 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:06:09.370 15:24:22 -- common/autotest_common.sh@1604 -- # killprocess 1173284 00:06:09.370 15:24:22 -- common/autotest_common.sh@946 -- # '[' -z 1173284 ']' 00:06:09.370 15:24:22 -- common/autotest_common.sh@950 -- # kill -0 1173284 00:06:09.370 15:24:22 -- common/autotest_common.sh@951 -- # uname 00:06:09.370 15:24:22 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.370 15:24:22 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1173284 00:06:09.370 15:24:22 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.370 15:24:22 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.370 15:24:22 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1173284' 00:06:09.370 killing process with pid 1173284 00:06:09.370 15:24:22 -- common/autotest_common.sh@965 -- # kill 1173284 00:06:09.370 15:24:22 -- common/autotest_common.sh@970 -- # wait 1173284 00:06:11.266 15:24:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:11.266 15:24:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:11.266 15:24:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:11.266 15:24:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:11.266 15:24:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:11.266 15:24:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.266 15:24:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.266 15:24:23 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:11.266 15:24:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.266 15:24:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.266 15:24:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.266 ************************************ 00:06:11.266 START TEST env 00:06:11.266 ************************************ 00:06:11.266 
15:24:23 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:11.266 * Looking for test storage... 00:06:11.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:11.266 15:24:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:11.266 15:24:24 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.266 15:24:24 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.266 15:24:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.266 ************************************ 00:06:11.266 START TEST env_memory 00:06:11.266 ************************************ 00:06:11.266 15:24:24 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:11.266 00:06:11.266 00:06:11.266 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.266 http://cunit.sourceforge.net/ 00:06:11.266 00:06:11.266 00:06:11.266 Suite: memory 00:06:11.266 Test: alloc and free memory map ...[2024-05-15 15:24:24.100188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:11.266 passed 00:06:11.266 Test: mem map translation ...[2024-05-15 15:24:24.120662] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:11.266 [2024-05-15 15:24:24.120684] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:11.266 [2024-05-15 15:24:24.120735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:11.266 [2024-05-15 15:24:24.120747] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:11.266 passed 00:06:11.266 Test: mem map registration ...[2024-05-15 15:24:24.161900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:11.266 [2024-05-15 15:24:24.161921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:11.266 passed 00:06:11.267 Test: mem map adjacent registrations ...passed 00:06:11.267 00:06:11.267 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.267 suites 1 1 n/a 0 0 00:06:11.267 tests 4 4 4 0 0 00:06:11.267 asserts 152 152 152 0 n/a 00:06:11.267 00:06:11.267 Elapsed time = 0.142 seconds 00:06:11.267 00:06:11.267 real 0m0.149s 00:06:11.267 user 0m0.142s 00:06:11.267 sys 0m0.007s 00:06:11.267 15:24:24 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.267 15:24:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:11.267 ************************************ 00:06:11.267 END TEST env_memory 00:06:11.267 ************************************ 00:06:11.267 15:24:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 
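[Illustrative sketch, not part of the captured run] The env suite being run here can also be driven outside the autotest wrapper against an already-built tree; HUGEMEM handling by setup.sh and the need for root are assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo HUGEMEM=2048 ./scripts/setup.sh        # reserve hugepages, bind devices to vfio-pci
  ./test/env/memory/memory_ut                 # the 4-test CUnit mem-map suite above
  ./test/env/vtophys/vtophys                  # the vtophys suite that runs next
  sudo ./scripts/setup.sh reset               # hand devices back to the kernel drivers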
00:06:11.267 15:24:24 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.267 15:24:24 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.267 15:24:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.267 ************************************ 00:06:11.267 START TEST env_vtophys 00:06:11.267 ************************************ 00:06:11.267 15:24:24 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:11.267 EAL: lib.eal log level changed from notice to debug 00:06:11.267 EAL: Detected lcore 0 as core 0 on socket 0 00:06:11.267 EAL: Detected lcore 1 as core 1 on socket 0 00:06:11.267 EAL: Detected lcore 2 as core 2 on socket 0 00:06:11.267 EAL: Detected lcore 3 as core 3 on socket 0 00:06:11.267 EAL: Detected lcore 4 as core 4 on socket 0 00:06:11.267 EAL: Detected lcore 5 as core 5 on socket 0 00:06:11.267 EAL: Detected lcore 6 as core 8 on socket 0 00:06:11.267 EAL: Detected lcore 7 as core 9 on socket 0 00:06:11.267 EAL: Detected lcore 8 as core 10 on socket 0 00:06:11.267 EAL: Detected lcore 9 as core 11 on socket 0 00:06:11.267 EAL: Detected lcore 10 as core 12 on socket 0 00:06:11.267 EAL: Detected lcore 11 as core 13 on socket 0 00:06:11.267 EAL: Detected lcore 12 as core 0 on socket 1 00:06:11.267 EAL: Detected lcore 13 as core 1 on socket 1 00:06:11.267 EAL: Detected lcore 14 as core 2 on socket 1 00:06:11.267 EAL: Detected lcore 15 as core 3 on socket 1 00:06:11.267 EAL: Detected lcore 16 as core 4 on socket 1 00:06:11.267 EAL: Detected lcore 17 as core 5 on socket 1 00:06:11.267 EAL: Detected lcore 18 as core 8 on socket 1 00:06:11.267 EAL: Detected lcore 19 as core 9 on socket 1 00:06:11.267 EAL: Detected lcore 20 as core 10 on socket 1 00:06:11.267 EAL: Detected lcore 21 as core 11 on socket 1 00:06:11.267 EAL: Detected lcore 22 as core 12 on socket 1 00:06:11.267 EAL: Detected lcore 23 as core 13 on socket 1 00:06:11.267 EAL: Detected lcore 24 as core 0 on socket 0 00:06:11.267 EAL: Detected lcore 25 as core 1 on socket 0 00:06:11.267 EAL: Detected lcore 26 as core 2 on socket 0 00:06:11.267 EAL: Detected lcore 27 as core 3 on socket 0 00:06:11.267 EAL: Detected lcore 28 as core 4 on socket 0 00:06:11.267 EAL: Detected lcore 29 as core 5 on socket 0 00:06:11.267 EAL: Detected lcore 30 as core 8 on socket 0 00:06:11.267 EAL: Detected lcore 31 as core 9 on socket 0 00:06:11.267 EAL: Detected lcore 32 as core 10 on socket 0 00:06:11.267 EAL: Detected lcore 33 as core 11 on socket 0 00:06:11.267 EAL: Detected lcore 34 as core 12 on socket 0 00:06:11.267 EAL: Detected lcore 35 as core 13 on socket 0 00:06:11.267 EAL: Detected lcore 36 as core 0 on socket 1 00:06:11.267 EAL: Detected lcore 37 as core 1 on socket 1 00:06:11.267 EAL: Detected lcore 38 as core 2 on socket 1 00:06:11.267 EAL: Detected lcore 39 as core 3 on socket 1 00:06:11.267 EAL: Detected lcore 40 as core 4 on socket 1 00:06:11.267 EAL: Detected lcore 41 as core 5 on socket 1 00:06:11.267 EAL: Detected lcore 42 as core 8 on socket 1 00:06:11.267 EAL: Detected lcore 43 as core 9 on socket 1 00:06:11.267 EAL: Detected lcore 44 as core 10 on socket 1 00:06:11.267 EAL: Detected lcore 45 as core 11 on socket 1 00:06:11.267 EAL: Detected lcore 46 as core 12 on socket 1 00:06:11.267 EAL: Detected lcore 47 as core 13 on socket 1 00:06:11.267 EAL: Maximum logical cores by configuration: 128 00:06:11.267 EAL: Detected CPU lcores: 48 00:06:11.267 EAL: Detected NUMA nodes: 2 00:06:11.267 EAL: Checking presence 
of .so 'librte_eal.so.24.2' 00:06:11.267 EAL: Detected shared linkage of DPDK 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:11.267 EAL: Registered [vdev] bus. 00:06:11.267 EAL: bus.vdev log level changed from disabled to notice 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:11.267 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:11.267 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:11.267 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:11.267 EAL: No shared files mode enabled, IPC will be disabled 00:06:11.267 EAL: No shared files mode enabled, IPC is disabled 00:06:11.267 EAL: Bus pci wants IOVA as 'DC' 00:06:11.267 EAL: Bus vdev wants IOVA as 'DC' 00:06:11.267 EAL: Buses did not request a specific IOVA mode. 00:06:11.267 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:11.267 EAL: Selected IOVA mode 'VA' 00:06:11.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.267 EAL: Probing VFIO support... 00:06:11.267 EAL: IOMMU type 1 (Type 1) is supported 00:06:11.267 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:11.267 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:11.267 EAL: VFIO support initialized 00:06:11.267 EAL: Ask a virtual area of 0x2e000 bytes 00:06:11.267 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:11.267 EAL: Setting up physically contiguous memory... 
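[Illustrative check, not part of the captured run] EAL only reaches "IOMMU is available, selecting IOVA as VA" and "VFIO support initialized", as above, when the host IOMMU is enabled and vfio-pci is loaded; those preconditions can be verified roughly like this (the lsmod check misses a built-in vfio-pci):

  [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ] && echo "IOMMU groups present"
  lsmod | grep -q '^vfio_pci' && echo "vfio-pci loaded"
  readlink -f /sys/bus/pci/devices/0000:0b:00.0/iommu_group   # group of the NVMe device bound earlier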
00:06:11.267 EAL: Setting maximum number of open files to 524288 00:06:11.267 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:11.267 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:11.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:11.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.267 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:11.267 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:11.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.267 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:11.267 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:11.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.267 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:11.267 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:11.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.267 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:11.267 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:11.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:11.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:11.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.267 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:11.267 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:11.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:11.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.267 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:11.267 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:11.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.267 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:11.268 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:11.268 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.268 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:11.268 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:11.268 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.268 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:11.268 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:11.268 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.268 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:11.268 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:11.268 EAL: Hugepages will be freed exactly as allocated. 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: TSC frequency is ~2700000 KHz 00:06:11.268 EAL: Main lcore 0 is ready (tid=7fc6ae58ea00;cpuset=[0]) 00:06:11.268 EAL: Trying to obtain current memory policy. 00:06:11.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.268 EAL: Restoring previous memory policy: 0 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was expanded by 2MB 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:11.268 EAL: Mem event callback 'spdk:(nil)' registered 00:06:11.268 00:06:11.268 00:06:11.268 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.268 http://cunit.sourceforge.net/ 00:06:11.268 00:06:11.268 00:06:11.268 Suite: components_suite 00:06:11.268 Test: vtophys_malloc_test ...passed 00:06:11.268 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:11.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.268 EAL: Restoring previous memory policy: 4 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was expanded by 4MB 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was shrunk by 4MB 00:06:11.268 EAL: Trying to obtain current memory policy. 00:06:11.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.268 EAL: Restoring previous memory policy: 4 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was expanded by 6MB 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was shrunk by 6MB 00:06:11.268 EAL: Trying to obtain current memory policy. 00:06:11.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.268 EAL: Restoring previous memory policy: 4 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was expanded by 10MB 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was shrunk by 10MB 00:06:11.268 EAL: Trying to obtain current memory policy. 
00:06:11.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.268 EAL: Restoring previous memory policy: 4 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was expanded by 18MB 00:06:11.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.268 EAL: request: mp_malloc_sync 00:06:11.268 EAL: No shared files mode enabled, IPC is disabled 00:06:11.268 EAL: Heap on socket 0 was shrunk by 18MB 00:06:11.268 EAL: Trying to obtain current memory policy. 00:06:11.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.525 EAL: Restoring previous memory policy: 4 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was expanded by 34MB 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was shrunk by 34MB 00:06:11.525 EAL: Trying to obtain current memory policy. 00:06:11.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.525 EAL: Restoring previous memory policy: 4 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was expanded by 66MB 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was shrunk by 66MB 00:06:11.525 EAL: Trying to obtain current memory policy. 00:06:11.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.525 EAL: Restoring previous memory policy: 4 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was expanded by 130MB 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was shrunk by 130MB 00:06:11.525 EAL: Trying to obtain current memory policy. 00:06:11.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.525 EAL: Restoring previous memory policy: 4 00:06:11.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.525 EAL: request: mp_malloc_sync 00:06:11.525 EAL: No shared files mode enabled, IPC is disabled 00:06:11.525 EAL: Heap on socket 0 was expanded by 258MB 00:06:11.782 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.782 EAL: request: mp_malloc_sync 00:06:11.782 EAL: No shared files mode enabled, IPC is disabled 00:06:11.782 EAL: Heap on socket 0 was shrunk by 258MB 00:06:11.782 EAL: Trying to obtain current memory policy. 
00:06:11.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.782 EAL: Restoring previous memory policy: 4 00:06:11.782 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.782 EAL: request: mp_malloc_sync 00:06:11.782 EAL: No shared files mode enabled, IPC is disabled 00:06:11.782 EAL: Heap on socket 0 was expanded by 514MB 00:06:12.039 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.039 EAL: request: mp_malloc_sync 00:06:12.039 EAL: No shared files mode enabled, IPC is disabled 00:06:12.039 EAL: Heap on socket 0 was shrunk by 514MB 00:06:12.039 EAL: Trying to obtain current memory policy. 00:06:12.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.296 EAL: Restoring previous memory policy: 4 00:06:12.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.296 EAL: request: mp_malloc_sync 00:06:12.296 EAL: No shared files mode enabled, IPC is disabled 00:06:12.296 EAL: Heap on socket 0 was expanded by 1026MB 00:06:12.554 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.812 EAL: request: mp_malloc_sync 00:06:12.812 EAL: No shared files mode enabled, IPC is disabled 00:06:12.812 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:12.812 passed 00:06:12.812 00:06:12.812 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.812 suites 1 1 n/a 0 0 00:06:12.812 tests 2 2 2 0 0 00:06:12.812 asserts 497 497 497 0 n/a 00:06:12.812 00:06:12.812 Elapsed time = 1.423 seconds 00:06:12.812 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.812 EAL: request: mp_malloc_sync 00:06:12.812 EAL: No shared files mode enabled, IPC is disabled 00:06:12.812 EAL: Heap on socket 0 was shrunk by 2MB 00:06:12.812 EAL: No shared files mode enabled, IPC is disabled 00:06:12.812 EAL: No shared files mode enabled, IPC is disabled 00:06:12.812 EAL: No shared files mode enabled, IPC is disabled 00:06:12.812 00:06:12.812 real 0m1.549s 00:06:12.812 user 0m0.903s 00:06:12.812 sys 0m0.612s 00:06:12.812 15:24:25 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.812 15:24:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:12.812 ************************************ 00:06:12.812 END TEST env_vtophys 00:06:12.812 ************************************ 00:06:12.812 15:24:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:12.812 15:24:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.812 15:24:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.812 15:24:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.812 ************************************ 00:06:12.812 START TEST env_pci 00:06:12.812 ************************************ 00:06:12.812 15:24:25 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:12.812 00:06:12.812 00:06:12.812 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.812 http://cunit.sourceforge.net/ 00:06:12.812 00:06:12.812 00:06:12.812 Suite: pci 00:06:12.812 Test: pci_hook ...[2024-05-15 15:24:25.874786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1174168 has claimed it 00:06:12.812 EAL: Cannot find device (10000:00:01.0) 00:06:12.812 EAL: Failed to attach device on primary process 00:06:12.812 passed 00:06:12.812 00:06:12.812 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:12.812 suites 1 1 n/a 0 0 00:06:12.812 tests 1 1 1 0 0 00:06:12.812 asserts 25 25 25 0 n/a 00:06:12.812 00:06:12.812 Elapsed time = 0.027 seconds 00:06:12.812 00:06:12.812 real 0m0.039s 00:06:12.812 user 0m0.012s 00:06:12.812 sys 0m0.027s 00:06:12.812 15:24:25 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.812 15:24:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:12.812 ************************************ 00:06:12.812 END TEST env_pci 00:06:12.812 ************************************ 00:06:13.070 15:24:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:13.070 15:24:25 env -- env/env.sh@15 -- # uname 00:06:13.070 15:24:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:13.070 15:24:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:13.070 15:24:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.070 15:24:25 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:13.070 15:24:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.070 15:24:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.071 ************************************ 00:06:13.071 START TEST env_dpdk_post_init 00:06:13.071 ************************************ 00:06:13.071 15:24:25 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.071 EAL: Detected CPU lcores: 48 00:06:13.071 EAL: Detected NUMA nodes: 2 00:06:13.071 EAL: Detected shared linkage of DPDK 00:06:13.071 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.071 EAL: Selected IOVA mode 'VA' 00:06:13.071 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.071 EAL: VFIO support initialized 00:06:13.071 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.071 EAL: Using IOMMU type 1 (Type 1) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:13.071 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:14.005 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:14.005 EAL: Probe PCI 
driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:17.284 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:06:17.284 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:06:17.284 Starting DPDK initialization... 00:06:17.284 Starting SPDK post initialization... 00:06:17.284 SPDK NVMe probe 00:06:17.284 Attaching to 0000:0b:00.0 00:06:17.284 Attached to 0000:0b:00.0 00:06:17.284 Cleaning up... 00:06:17.284 00:06:17.284 real 0m4.381s 00:06:17.284 user 0m3.241s 00:06:17.284 sys 0m0.194s 00:06:17.284 15:24:30 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.284 15:24:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:17.284 ************************************ 00:06:17.284 END TEST env_dpdk_post_init 00:06:17.284 ************************************ 00:06:17.284 15:24:30 env -- env/env.sh@26 -- # uname 00:06:17.284 15:24:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:17.284 15:24:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:17.284 15:24:30 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.284 15:24:30 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.284 15:24:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.543 ************************************ 00:06:17.543 START TEST env_mem_callbacks 00:06:17.543 ************************************ 00:06:17.543 15:24:30 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:17.543 EAL: Detected CPU lcores: 48 00:06:17.543 EAL: Detected NUMA nodes: 2 00:06:17.543 EAL: Detected shared linkage of DPDK 00:06:17.543 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:17.543 EAL: Selected IOVA mode 'VA' 00:06:17.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.543 EAL: VFIO support initialized 00:06:17.543 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:17.543 00:06:17.543 00:06:17.543 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.543 http://cunit.sourceforge.net/ 00:06:17.543 00:06:17.543 00:06:17.543 Suite: memory 00:06:17.543 Test: test ... 
00:06:17.543 register 0x200000200000 2097152 00:06:17.543 malloc 3145728 00:06:17.543 register 0x200000400000 4194304 00:06:17.543 buf 0x200000500000 len 3145728 PASSED 00:06:17.543 malloc 64 00:06:17.543 buf 0x2000004fff40 len 64 PASSED 00:06:17.543 malloc 4194304 00:06:17.543 register 0x200000800000 6291456 00:06:17.543 buf 0x200000a00000 len 4194304 PASSED 00:06:17.543 free 0x200000500000 3145728 00:06:17.543 free 0x2000004fff40 64 00:06:17.543 unregister 0x200000400000 4194304 PASSED 00:06:17.543 free 0x200000a00000 4194304 00:06:17.543 unregister 0x200000800000 6291456 PASSED 00:06:17.543 malloc 8388608 00:06:17.543 register 0x200000400000 10485760 00:06:17.543 buf 0x200000600000 len 8388608 PASSED 00:06:17.543 free 0x200000600000 8388608 00:06:17.543 unregister 0x200000400000 10485760 PASSED 00:06:17.543 passed 00:06:17.543 00:06:17.543 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.543 suites 1 1 n/a 0 0 00:06:17.543 tests 1 1 1 0 0 00:06:17.543 asserts 15 15 15 0 n/a 00:06:17.543 00:06:17.543 Elapsed time = 0.005 seconds 00:06:17.543 00:06:17.543 real 0m0.055s 00:06:17.543 user 0m0.014s 00:06:17.543 sys 0m0.040s 00:06:17.543 15:24:30 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.543 15:24:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:17.543 ************************************ 00:06:17.543 END TEST env_mem_callbacks 00:06:17.543 ************************************ 00:06:17.543 00:06:17.543 real 0m6.487s 00:06:17.543 user 0m4.429s 00:06:17.543 sys 0m1.089s 00:06:17.543 15:24:30 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.543 15:24:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.543 ************************************ 00:06:17.543 END TEST env 00:06:17.543 ************************************ 00:06:17.543 15:24:30 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:17.543 15:24:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.543 15:24:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.543 15:24:30 -- common/autotest_common.sh@10 -- # set +x 00:06:17.543 ************************************ 00:06:17.543 START TEST rpc 00:06:17.543 ************************************ 00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:17.543 * Looking for test storage... 00:06:17.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:17.543 15:24:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1174832 00:06:17.543 15:24:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:17.543 15:24:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.543 15:24:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1174832 00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@827 -- # '[' -z 1174832 ']' 00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
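[Illustrative sketch, not part of the captured run] The rpc suite scaffolding above amounts to starting the target with the bdev tracepoint group enabled and waiting for its RPC socket; the polling loop below is a crude stand-in for waitforlisten, and the default socket path is assumed:

  build/bin/spdk_tgt -e bdev &                      # -e bdev matches the logged invocation
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  scripts/rpc.py rpc_get_methods > /dev/null        # RPC plane is answering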
00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.543 15:24:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.543 [2024-05-15 15:24:30.614756] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:17.543 [2024-05-15 15:24:30.614847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174832 ] 00:06:17.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.801 [2024-05-15 15:24:30.655886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.801 [2024-05-15 15:24:30.692213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.801 [2024-05-15 15:24:30.782521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:17.801 [2024-05-15 15:24:30.782584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1174832' to capture a snapshot of events at runtime. 00:06:17.801 [2024-05-15 15:24:30.782611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.801 [2024-05-15 15:24:30.782624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.801 [2024-05-15 15:24:30.782636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1174832 for offline analysis/debug. 00:06:17.801 [2024-05-15 15:24:30.782668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.058 15:24:31 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.058 15:24:31 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:18.058 15:24:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:18.058 15:24:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:18.058 15:24:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:18.058 15:24:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:18.058 15:24:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.058 15:24:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.058 15:24:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 ************************************ 00:06:18.058 START TEST rpc_integrity 00:06:18.058 ************************************ 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
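[Illustrative sketch, not part of the captured run] As the target notes above, the bdev tracepoints can be snapshotted while it runs; the -s/-p form is exactly what the target prints, and the shm file name is the one reported by trace_get_info below, but the binary location is assumed:

  ls -l /dev/shm/spdk_tgt_trace.pid1174832          # per-pid trace shared-memory file
  build/bin/spdk_trace -s spdk_tgt -p 1174832       # decode a snapshot of the recorded events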
00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.058 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:18.058 { 00:06:18.059 "name": "Malloc0", 00:06:18.059 "aliases": [ 00:06:18.059 "70816c6b-50e4-4d9b-9638-7a4a358d0146" 00:06:18.059 ], 00:06:18.059 "product_name": "Malloc disk", 00:06:18.059 "block_size": 512, 00:06:18.059 "num_blocks": 16384, 00:06:18.059 "uuid": "70816c6b-50e4-4d9b-9638-7a4a358d0146", 00:06:18.059 "assigned_rate_limits": { 00:06:18.059 "rw_ios_per_sec": 0, 00:06:18.059 "rw_mbytes_per_sec": 0, 00:06:18.059 "r_mbytes_per_sec": 0, 00:06:18.059 "w_mbytes_per_sec": 0 00:06:18.059 }, 00:06:18.059 "claimed": false, 00:06:18.059 "zoned": false, 00:06:18.059 "supported_io_types": { 00:06:18.059 "read": true, 00:06:18.059 "write": true, 00:06:18.059 "unmap": true, 00:06:18.059 "write_zeroes": true, 00:06:18.059 "flush": true, 00:06:18.059 "reset": true, 00:06:18.059 "compare": false, 00:06:18.059 "compare_and_write": false, 00:06:18.059 "abort": true, 00:06:18.059 "nvme_admin": false, 00:06:18.059 "nvme_io": false 00:06:18.059 }, 00:06:18.059 "memory_domains": [ 00:06:18.059 { 00:06:18.059 "dma_device_id": "system", 00:06:18.059 "dma_device_type": 1 00:06:18.059 }, 00:06:18.059 { 00:06:18.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.059 "dma_device_type": 2 00:06:18.059 } 00:06:18.059 ], 00:06:18.059 "driver_specific": {} 00:06:18.059 } 00:06:18.059 ]' 00:06:18.059 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:18.316 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:18.316 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:18.316 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.316 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.316 [2024-05-15 15:24:31.177873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:18.316 [2024-05-15 15:24:31.177918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.316 [2024-05-15 15:24:31.177942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac5160 00:06:18.316 [2024-05-15 15:24:31.177957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.316 [2024-05-15 15:24:31.179453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.316 [2024-05-15 15:24:31.179479] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 
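[Illustrative sketch, not part of the captured run] The JSON dumped around this point is ordinary bdev_get_bdevs output, so individual fields can be pulled out with jq, for example:

  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name) claimed=\(.claimed)"'
  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.name=="Passthru0") | .driver_specific.passthru.base_bdev_name'   # -> Malloc0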
00:06:18.316 Passthru0 00:06:18.316 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.316 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:18.316 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.316 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.316 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.316 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:18.316 { 00:06:18.316 "name": "Malloc0", 00:06:18.316 "aliases": [ 00:06:18.316 "70816c6b-50e4-4d9b-9638-7a4a358d0146" 00:06:18.316 ], 00:06:18.316 "product_name": "Malloc disk", 00:06:18.316 "block_size": 512, 00:06:18.316 "num_blocks": 16384, 00:06:18.316 "uuid": "70816c6b-50e4-4d9b-9638-7a4a358d0146", 00:06:18.316 "assigned_rate_limits": { 00:06:18.316 "rw_ios_per_sec": 0, 00:06:18.316 "rw_mbytes_per_sec": 0, 00:06:18.316 "r_mbytes_per_sec": 0, 00:06:18.316 "w_mbytes_per_sec": 0 00:06:18.316 }, 00:06:18.316 "claimed": true, 00:06:18.316 "claim_type": "exclusive_write", 00:06:18.316 "zoned": false, 00:06:18.316 "supported_io_types": { 00:06:18.316 "read": true, 00:06:18.316 "write": true, 00:06:18.316 "unmap": true, 00:06:18.316 "write_zeroes": true, 00:06:18.316 "flush": true, 00:06:18.316 "reset": true, 00:06:18.316 "compare": false, 00:06:18.316 "compare_and_write": false, 00:06:18.316 "abort": true, 00:06:18.316 "nvme_admin": false, 00:06:18.316 "nvme_io": false 00:06:18.316 }, 00:06:18.316 "memory_domains": [ 00:06:18.316 { 00:06:18.316 "dma_device_id": "system", 00:06:18.316 "dma_device_type": 1 00:06:18.316 }, 00:06:18.316 { 00:06:18.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.316 "dma_device_type": 2 00:06:18.316 } 00:06:18.316 ], 00:06:18.316 "driver_specific": {} 00:06:18.316 }, 00:06:18.316 { 00:06:18.316 "name": "Passthru0", 00:06:18.316 "aliases": [ 00:06:18.316 "b83f87bf-1e0f-5c4a-9fe9-43f89a41d6da" 00:06:18.316 ], 00:06:18.316 "product_name": "passthru", 00:06:18.316 "block_size": 512, 00:06:18.316 "num_blocks": 16384, 00:06:18.316 "uuid": "b83f87bf-1e0f-5c4a-9fe9-43f89a41d6da", 00:06:18.316 "assigned_rate_limits": { 00:06:18.316 "rw_ios_per_sec": 0, 00:06:18.316 "rw_mbytes_per_sec": 0, 00:06:18.316 "r_mbytes_per_sec": 0, 00:06:18.316 "w_mbytes_per_sec": 0 00:06:18.316 }, 00:06:18.316 "claimed": false, 00:06:18.316 "zoned": false, 00:06:18.316 "supported_io_types": { 00:06:18.316 "read": true, 00:06:18.316 "write": true, 00:06:18.316 "unmap": true, 00:06:18.316 "write_zeroes": true, 00:06:18.316 "flush": true, 00:06:18.316 "reset": true, 00:06:18.316 "compare": false, 00:06:18.316 "compare_and_write": false, 00:06:18.316 "abort": true, 00:06:18.316 "nvme_admin": false, 00:06:18.316 "nvme_io": false 00:06:18.317 }, 00:06:18.317 "memory_domains": [ 00:06:18.317 { 00:06:18.317 "dma_device_id": "system", 00:06:18.317 "dma_device_type": 1 00:06:18.317 }, 00:06:18.317 { 00:06:18.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.317 "dma_device_type": 2 00:06:18.317 } 00:06:18.317 ], 00:06:18.317 "driver_specific": { 00:06:18.317 "passthru": { 00:06:18.317 "name": "Passthru0", 00:06:18.317 "base_bdev_name": "Malloc0" 00:06:18.317 } 00:06:18.317 } 00:06:18.317 } 00:06:18.317 ]' 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 
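[Illustrative sketch, not part of the captured run] Condensed, the rpc_integrity round-trip traced in this block is the following RPC sequence; the method names, sizes and flags are the ones logged above:

  scripts/rpc.py bdev_malloc_create 8 512                       # -> Malloc0 (8 MiB, 512-byte blocks)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # claim Malloc0 behind a passthru vbdev
  scripts/rpc.py bdev_get_bdevs | jq length                     # -> 2
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                     # -> 0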
00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:18.317 15:24:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:18.317 00:06:18.317 real 0m0.226s 00:06:18.317 user 0m0.146s 00:06:18.317 sys 0m0.024s 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.317 15:24:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 ************************************ 00:06:18.317 END TEST rpc_integrity 00:06:18.317 ************************************ 00:06:18.317 15:24:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:18.317 15:24:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.317 15:24:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.317 15:24:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 ************************************ 00:06:18.317 START TEST rpc_plugins 00:06:18.317 ************************************ 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:18.317 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.317 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:18.317 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.317 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.317 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:18.317 { 00:06:18.317 "name": "Malloc1", 00:06:18.317 "aliases": [ 00:06:18.317 "d10348c2-5f87-451f-9236-30bbf243571d" 00:06:18.317 ], 00:06:18.317 "product_name": "Malloc disk", 00:06:18.317 "block_size": 4096, 00:06:18.317 "num_blocks": 256, 00:06:18.317 "uuid": "d10348c2-5f87-451f-9236-30bbf243571d", 00:06:18.317 "assigned_rate_limits": { 00:06:18.317 "rw_ios_per_sec": 0, 00:06:18.317 "rw_mbytes_per_sec": 0, 00:06:18.317 "r_mbytes_per_sec": 0, 00:06:18.317 "w_mbytes_per_sec": 0 00:06:18.317 }, 00:06:18.317 "claimed": false, 
00:06:18.317 "zoned": false, 00:06:18.317 "supported_io_types": { 00:06:18.317 "read": true, 00:06:18.317 "write": true, 00:06:18.317 "unmap": true, 00:06:18.317 "write_zeroes": true, 00:06:18.317 "flush": true, 00:06:18.317 "reset": true, 00:06:18.317 "compare": false, 00:06:18.317 "compare_and_write": false, 00:06:18.317 "abort": true, 00:06:18.317 "nvme_admin": false, 00:06:18.317 "nvme_io": false 00:06:18.317 }, 00:06:18.317 "memory_domains": [ 00:06:18.317 { 00:06:18.317 "dma_device_id": "system", 00:06:18.317 "dma_device_type": 1 00:06:18.317 }, 00:06:18.317 { 00:06:18.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.317 "dma_device_type": 2 00:06:18.317 } 00:06:18.317 ], 00:06:18.317 "driver_specific": {} 00:06:18.317 } 00:06:18.317 ]' 00:06:18.317 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:18.575 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:18.575 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.575 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.575 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:18.575 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:18.575 15:24:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:18.575 00:06:18.575 real 0m0.114s 00:06:18.575 user 0m0.071s 00:06:18.575 sys 0m0.013s 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.575 15:24:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.575 ************************************ 00:06:18.575 END TEST rpc_plugins 00:06:18.575 ************************************ 00:06:18.575 15:24:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:18.575 15:24:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.575 15:24:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.575 15:24:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.575 ************************************ 00:06:18.575 START TEST rpc_trace_cmd_test 00:06:18.575 ************************************ 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:18.575 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1174832", 00:06:18.575 "tpoint_group_mask": "0x8", 00:06:18.575 "iscsi_conn": { 00:06:18.575 "mask": "0x2", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 
00:06:18.575 "scsi": { 00:06:18.575 "mask": "0x4", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "bdev": { 00:06:18.575 "mask": "0x8", 00:06:18.575 "tpoint_mask": "0xffffffffffffffff" 00:06:18.575 }, 00:06:18.575 "nvmf_rdma": { 00:06:18.575 "mask": "0x10", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "nvmf_tcp": { 00:06:18.575 "mask": "0x20", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "ftl": { 00:06:18.575 "mask": "0x40", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "blobfs": { 00:06:18.575 "mask": "0x80", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "dsa": { 00:06:18.575 "mask": "0x200", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "thread": { 00:06:18.575 "mask": "0x400", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "nvme_pcie": { 00:06:18.575 "mask": "0x800", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "iaa": { 00:06:18.575 "mask": "0x1000", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "nvme_tcp": { 00:06:18.575 "mask": "0x2000", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "bdev_nvme": { 00:06:18.575 "mask": "0x4000", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 }, 00:06:18.575 "sock": { 00:06:18.575 "mask": "0x8000", 00:06:18.575 "tpoint_mask": "0x0" 00:06:18.575 } 00:06:18.575 }' 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:18.575 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:18.833 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:18.833 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:18.833 15:24:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:18.833 00:06:18.833 real 0m0.203s 00:06:18.833 user 0m0.178s 00:06:18.833 sys 0m0.016s 00:06:18.833 15:24:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.833 15:24:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 ************************************ 00:06:18.834 END TEST rpc_trace_cmd_test 00:06:18.834 ************************************ 00:06:18.834 15:24:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:18.834 15:24:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:18.834 15:24:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:18.834 15:24:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.834 15:24:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.834 15:24:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 ************************************ 00:06:18.834 START TEST rpc_daemon_integrity 00:06:18.834 ************************************ 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:18.834 { 00:06:18.834 "name": "Malloc2", 00:06:18.834 "aliases": [ 00:06:18.834 "655ef79d-3da9-4879-bdf3-591bb30dbb11" 00:06:18.834 ], 00:06:18.834 "product_name": "Malloc disk", 00:06:18.834 "block_size": 512, 00:06:18.834 "num_blocks": 16384, 00:06:18.834 "uuid": "655ef79d-3da9-4879-bdf3-591bb30dbb11", 00:06:18.834 "assigned_rate_limits": { 00:06:18.834 "rw_ios_per_sec": 0, 00:06:18.834 "rw_mbytes_per_sec": 0, 00:06:18.834 "r_mbytes_per_sec": 0, 00:06:18.834 "w_mbytes_per_sec": 0 00:06:18.834 }, 00:06:18.834 "claimed": false, 00:06:18.834 "zoned": false, 00:06:18.834 "supported_io_types": { 00:06:18.834 "read": true, 00:06:18.834 "write": true, 00:06:18.834 "unmap": true, 00:06:18.834 "write_zeroes": true, 00:06:18.834 "flush": true, 00:06:18.834 "reset": true, 00:06:18.834 "compare": false, 00:06:18.834 "compare_and_write": false, 00:06:18.834 "abort": true, 00:06:18.834 "nvme_admin": false, 00:06:18.834 "nvme_io": false 00:06:18.834 }, 00:06:18.834 "memory_domains": [ 00:06:18.834 { 00:06:18.834 "dma_device_id": "system", 00:06:18.834 "dma_device_type": 1 00:06:18.834 }, 00:06:18.834 { 00:06:18.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.834 "dma_device_type": 2 00:06:18.834 } 00:06:18.834 ], 00:06:18.834 "driver_specific": {} 00:06:18.834 } 00:06:18.834 ]' 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 [2024-05-15 15:24:31.883999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:18.834 [2024-05-15 15:24:31.884044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.834 [2024-05-15 15:24:31.884068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x1c76a20 00:06:18.834 [2024-05-15 15:24:31.884083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.834 [2024-05-15 15:24:31.885442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.834 [2024-05-15 15:24:31.885470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:18.834 Passthru0 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:18.834 { 00:06:18.834 "name": "Malloc2", 00:06:18.834 "aliases": [ 00:06:18.834 "655ef79d-3da9-4879-bdf3-591bb30dbb11" 00:06:18.834 ], 00:06:18.834 "product_name": "Malloc disk", 00:06:18.834 "block_size": 512, 00:06:18.834 "num_blocks": 16384, 00:06:18.834 "uuid": "655ef79d-3da9-4879-bdf3-591bb30dbb11", 00:06:18.834 "assigned_rate_limits": { 00:06:18.834 "rw_ios_per_sec": 0, 00:06:18.834 "rw_mbytes_per_sec": 0, 00:06:18.834 "r_mbytes_per_sec": 0, 00:06:18.834 "w_mbytes_per_sec": 0 00:06:18.834 }, 00:06:18.834 "claimed": true, 00:06:18.834 "claim_type": "exclusive_write", 00:06:18.834 "zoned": false, 00:06:18.834 "supported_io_types": { 00:06:18.834 "read": true, 00:06:18.834 "write": true, 00:06:18.834 "unmap": true, 00:06:18.834 "write_zeroes": true, 00:06:18.834 "flush": true, 00:06:18.834 "reset": true, 00:06:18.834 "compare": false, 00:06:18.834 "compare_and_write": false, 00:06:18.834 "abort": true, 00:06:18.834 "nvme_admin": false, 00:06:18.834 "nvme_io": false 00:06:18.834 }, 00:06:18.834 "memory_domains": [ 00:06:18.834 { 00:06:18.834 "dma_device_id": "system", 00:06:18.834 "dma_device_type": 1 00:06:18.834 }, 00:06:18.834 { 00:06:18.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.834 "dma_device_type": 2 00:06:18.834 } 00:06:18.834 ], 00:06:18.834 "driver_specific": {} 00:06:18.834 }, 00:06:18.834 { 00:06:18.834 "name": "Passthru0", 00:06:18.834 "aliases": [ 00:06:18.834 "1ecf8702-f9c0-5fa8-90df-9e4ba347e0db" 00:06:18.834 ], 00:06:18.834 "product_name": "passthru", 00:06:18.834 "block_size": 512, 00:06:18.834 "num_blocks": 16384, 00:06:18.834 "uuid": "1ecf8702-f9c0-5fa8-90df-9e4ba347e0db", 00:06:18.834 "assigned_rate_limits": { 00:06:18.834 "rw_ios_per_sec": 0, 00:06:18.834 "rw_mbytes_per_sec": 0, 00:06:18.834 "r_mbytes_per_sec": 0, 00:06:18.834 "w_mbytes_per_sec": 0 00:06:18.834 }, 00:06:18.834 "claimed": false, 00:06:18.834 "zoned": false, 00:06:18.834 "supported_io_types": { 00:06:18.834 "read": true, 00:06:18.834 "write": true, 00:06:18.834 "unmap": true, 00:06:18.834 "write_zeroes": true, 00:06:18.834 "flush": true, 00:06:18.834 "reset": true, 00:06:18.834 "compare": false, 00:06:18.834 "compare_and_write": false, 00:06:18.834 "abort": true, 00:06:18.834 "nvme_admin": false, 00:06:18.834 "nvme_io": false 00:06:18.834 }, 00:06:18.834 "memory_domains": [ 00:06:18.834 { 00:06:18.834 "dma_device_id": "system", 00:06:18.834 "dma_device_type": 1 00:06:18.834 }, 00:06:18.834 { 00:06:18.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.834 "dma_device_type": 2 00:06:18.834 } 00:06:18.834 ], 00:06:18.834 "driver_specific": { 
00:06:18.834 "passthru": { 00:06:18.834 "name": "Passthru0", 00:06:18.834 "base_bdev_name": "Malloc2" 00:06:18.834 } 00:06:18.834 } 00:06:18.834 } 00:06:18.834 ]' 00:06:18.834 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:19.124 00:06:19.124 real 0m0.222s 00:06:19.124 user 0m0.141s 00:06:19.124 sys 0m0.023s 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.124 15:24:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.124 ************************************ 00:06:19.124 END TEST rpc_daemon_integrity 00:06:19.124 ************************************ 00:06:19.124 15:24:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:19.124 15:24:32 rpc -- rpc/rpc.sh@84 -- # killprocess 1174832 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@946 -- # '[' -z 1174832 ']' 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@950 -- # kill -0 1174832 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@951 -- # uname 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1174832 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1174832' 00:06:19.124 killing process with pid 1174832 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@965 -- # kill 1174832 00:06:19.124 15:24:32 rpc -- common/autotest_common.sh@970 -- # wait 1174832 00:06:19.384 00:06:19.384 real 0m1.943s 00:06:19.384 user 0m2.385s 00:06:19.384 sys 0m0.640s 00:06:19.384 15:24:32 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.384 15:24:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.384 ************************************ 00:06:19.384 END TEST rpc 00:06:19.384 
************************************ 00:06:19.643 15:24:32 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:19.643 15:24:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.643 15:24:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.643 15:24:32 -- common/autotest_common.sh@10 -- # set +x 00:06:19.643 ************************************ 00:06:19.643 START TEST skip_rpc 00:06:19.643 ************************************ 00:06:19.643 15:24:32 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:19.643 * Looking for test storage... 00:06:19.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:19.643 15:24:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:19.643 15:24:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.643 15:24:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:19.643 15:24:32 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.643 15:24:32 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.643 15:24:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.643 ************************************ 00:06:19.643 START TEST skip_rpc 00:06:19.643 ************************************ 00:06:19.643 15:24:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:19.643 15:24:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1175267 00:06:19.643 15:24:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:19.643 15:24:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.643 15:24:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:19.643 [2024-05-15 15:24:32.645603] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:19.643 [2024-05-15 15:24:32.645683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175267 ] 00:06:19.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.643 [2024-05-15 15:24:32.679272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
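The skip_rpc case above launches spdk_tgt with --no-rpc-server and then expects every RPC call to fail. A minimal standalone sketch of that check follows; the relative binary path, the sleep, and reliance on the default /var/tmp/spdk.sock socket are assumptions for illustration, not values taken from this run:

# start the target with the RPC server disabled (path assumed)
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
# any RPC must now fail; a success here would mean the test is broken
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded with --no-rpc-server" >&2
fi
kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null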
00:06:19.643 [2024-05-15 15:24:32.716801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.901 [2024-05-15 15:24:32.805334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1175267 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1175267 ']' 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1175267 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1175267 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1175267' 00:06:25.164 killing process with pid 1175267 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1175267 00:06:25.164 15:24:37 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1175267 00:06:25.164 00:06:25.164 real 0m5.460s 00:06:25.164 user 0m5.149s 00:06:25.164 sys 0m0.318s 00:06:25.164 15:24:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.164 15:24:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.164 ************************************ 00:06:25.164 END TEST skip_rpc 00:06:25.164 ************************************ 00:06:25.164 15:24:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:25.164 15:24:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.164 15:24:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:06:25.164 15:24:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.164 ************************************ 00:06:25.164 START TEST skip_rpc_with_json 00:06:25.164 ************************************ 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1175953 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1175953 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1175953 ']' 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.164 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.164 [2024-05-15 15:24:38.162921] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:25.164 [2024-05-15 15:24:38.163004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175953 ] 00:06:25.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.164 [2024-05-15 15:24:38.200542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
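The skip_rpc_with_json case starting here drives the target over RPC, snapshots the result with save_config, and later replays that file into a fresh target, as the log below shows. A rough sketch of that flow, with the config path shortened to a placeholder:

# create the TCP transport on the running target, then snapshot the configuration
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > test/rpc/config.json
# a new target can then be seeded from the file with no live RPC server at all
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json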
00:06:25.164 [2024-05-15 15:24:38.231206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.422 [2024-05-15 15:24:38.313358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.680 [2024-05-15 15:24:38.568399] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:25.680 request: 00:06:25.680 { 00:06:25.680 "trtype": "tcp", 00:06:25.680 "method": "nvmf_get_transports", 00:06:25.680 "req_id": 1 00:06:25.680 } 00:06:25.680 Got JSON-RPC error response 00:06:25.680 response: 00:06:25.680 { 00:06:25.680 "code": -19, 00:06:25.680 "message": "No such device" 00:06:25.680 } 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.680 [2024-05-15 15:24:38.576528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.680 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:25.680 { 00:06:25.680 "subsystems": [ 00:06:25.680 { 00:06:25.680 "subsystem": "vfio_user_target", 00:06:25.680 "config": null 00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "subsystem": "keyring", 00:06:25.680 "config": [] 00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "subsystem": "iobuf", 00:06:25.680 "config": [ 00:06:25.680 { 00:06:25.680 "method": "iobuf_set_options", 00:06:25.680 "params": { 00:06:25.680 "small_pool_count": 8192, 00:06:25.680 "large_pool_count": 1024, 00:06:25.680 "small_bufsize": 8192, 00:06:25.680 "large_bufsize": 135168 00:06:25.680 } 00:06:25.680 } 00:06:25.680 ] 00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "subsystem": "sock", 00:06:25.680 "config": [ 00:06:25.680 { 00:06:25.680 "method": "sock_impl_set_options", 00:06:25.680 "params": { 00:06:25.680 "impl_name": "posix", 00:06:25.680 "recv_buf_size": 2097152, 00:06:25.680 "send_buf_size": 2097152, 00:06:25.680 "enable_recv_pipe": true, 00:06:25.680 "enable_quickack": false, 00:06:25.680 "enable_placement_id": 0, 00:06:25.680 "enable_zerocopy_send_server": true, 00:06:25.680 "enable_zerocopy_send_client": false, 00:06:25.680 "zerocopy_threshold": 0, 00:06:25.680 "tls_version": 0, 00:06:25.680 "enable_ktls": false 00:06:25.680 } 
00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "method": "sock_impl_set_options", 00:06:25.680 "params": { 00:06:25.680 "impl_name": "ssl", 00:06:25.680 "recv_buf_size": 4096, 00:06:25.680 "send_buf_size": 4096, 00:06:25.680 "enable_recv_pipe": true, 00:06:25.680 "enable_quickack": false, 00:06:25.680 "enable_placement_id": 0, 00:06:25.680 "enable_zerocopy_send_server": true, 00:06:25.680 "enable_zerocopy_send_client": false, 00:06:25.680 "zerocopy_threshold": 0, 00:06:25.680 "tls_version": 0, 00:06:25.680 "enable_ktls": false 00:06:25.680 } 00:06:25.680 } 00:06:25.680 ] 00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "subsystem": "vmd", 00:06:25.680 "config": [] 00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "subsystem": "accel", 00:06:25.680 "config": [ 00:06:25.680 { 00:06:25.680 "method": "accel_set_options", 00:06:25.680 "params": { 00:06:25.680 "small_cache_size": 128, 00:06:25.680 "large_cache_size": 16, 00:06:25.680 "task_count": 2048, 00:06:25.680 "sequence_count": 2048, 00:06:25.680 "buf_count": 2048 00:06:25.680 } 00:06:25.680 } 00:06:25.680 ] 00:06:25.680 }, 00:06:25.680 { 00:06:25.680 "subsystem": "bdev", 00:06:25.680 "config": [ 00:06:25.680 { 00:06:25.680 "method": "bdev_set_options", 00:06:25.680 "params": { 00:06:25.681 "bdev_io_pool_size": 65535, 00:06:25.681 "bdev_io_cache_size": 256, 00:06:25.681 "bdev_auto_examine": true, 00:06:25.681 "iobuf_small_cache_size": 128, 00:06:25.681 "iobuf_large_cache_size": 16 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "bdev_raid_set_options", 00:06:25.681 "params": { 00:06:25.681 "process_window_size_kb": 1024 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "bdev_iscsi_set_options", 00:06:25.681 "params": { 00:06:25.681 "timeout_sec": 30 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "bdev_nvme_set_options", 00:06:25.681 "params": { 00:06:25.681 "action_on_timeout": "none", 00:06:25.681 "timeout_us": 0, 00:06:25.681 "timeout_admin_us": 0, 00:06:25.681 "keep_alive_timeout_ms": 10000, 00:06:25.681 "arbitration_burst": 0, 00:06:25.681 "low_priority_weight": 0, 00:06:25.681 "medium_priority_weight": 0, 00:06:25.681 "high_priority_weight": 0, 00:06:25.681 "nvme_adminq_poll_period_us": 10000, 00:06:25.681 "nvme_ioq_poll_period_us": 0, 00:06:25.681 "io_queue_requests": 0, 00:06:25.681 "delay_cmd_submit": true, 00:06:25.681 "transport_retry_count": 4, 00:06:25.681 "bdev_retry_count": 3, 00:06:25.681 "transport_ack_timeout": 0, 00:06:25.681 "ctrlr_loss_timeout_sec": 0, 00:06:25.681 "reconnect_delay_sec": 0, 00:06:25.681 "fast_io_fail_timeout_sec": 0, 00:06:25.681 "disable_auto_failback": false, 00:06:25.681 "generate_uuids": false, 00:06:25.681 "transport_tos": 0, 00:06:25.681 "nvme_error_stat": false, 00:06:25.681 "rdma_srq_size": 0, 00:06:25.681 "io_path_stat": false, 00:06:25.681 "allow_accel_sequence": false, 00:06:25.681 "rdma_max_cq_size": 0, 00:06:25.681 "rdma_cm_event_timeout_ms": 0, 00:06:25.681 "dhchap_digests": [ 00:06:25.681 "sha256", 00:06:25.681 "sha384", 00:06:25.681 "sha512" 00:06:25.681 ], 00:06:25.681 "dhchap_dhgroups": [ 00:06:25.681 "null", 00:06:25.681 "ffdhe2048", 00:06:25.681 "ffdhe3072", 00:06:25.681 "ffdhe4096", 00:06:25.681 "ffdhe6144", 00:06:25.681 "ffdhe8192" 00:06:25.681 ] 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "bdev_nvme_set_hotplug", 00:06:25.681 "params": { 00:06:25.681 "period_us": 100000, 00:06:25.681 "enable": false 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "bdev_wait_for_examine" 00:06:25.681 } 
00:06:25.681 ] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "scsi", 00:06:25.681 "config": null 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "scheduler", 00:06:25.681 "config": [ 00:06:25.681 { 00:06:25.681 "method": "framework_set_scheduler", 00:06:25.681 "params": { 00:06:25.681 "name": "static" 00:06:25.681 } 00:06:25.681 } 00:06:25.681 ] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "vhost_scsi", 00:06:25.681 "config": [] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "vhost_blk", 00:06:25.681 "config": [] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "ublk", 00:06:25.681 "config": [] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "nbd", 00:06:25.681 "config": [] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "nvmf", 00:06:25.681 "config": [ 00:06:25.681 { 00:06:25.681 "method": "nvmf_set_config", 00:06:25.681 "params": { 00:06:25.681 "discovery_filter": "match_any", 00:06:25.681 "admin_cmd_passthru": { 00:06:25.681 "identify_ctrlr": false 00:06:25.681 } 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "nvmf_set_max_subsystems", 00:06:25.681 "params": { 00:06:25.681 "max_subsystems": 1024 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "nvmf_set_crdt", 00:06:25.681 "params": { 00:06:25.681 "crdt1": 0, 00:06:25.681 "crdt2": 0, 00:06:25.681 "crdt3": 0 00:06:25.681 } 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "method": "nvmf_create_transport", 00:06:25.681 "params": { 00:06:25.681 "trtype": "TCP", 00:06:25.681 "max_queue_depth": 128, 00:06:25.681 "max_io_qpairs_per_ctrlr": 127, 00:06:25.681 "in_capsule_data_size": 4096, 00:06:25.681 "max_io_size": 131072, 00:06:25.681 "io_unit_size": 131072, 00:06:25.681 "max_aq_depth": 128, 00:06:25.681 "num_shared_buffers": 511, 00:06:25.681 "buf_cache_size": 4294967295, 00:06:25.681 "dif_insert_or_strip": false, 00:06:25.681 "zcopy": false, 00:06:25.681 "c2h_success": true, 00:06:25.681 "sock_priority": 0, 00:06:25.681 "abort_timeout_sec": 1, 00:06:25.681 "ack_timeout": 0, 00:06:25.681 "data_wr_pool_size": 0 00:06:25.681 } 00:06:25.681 } 00:06:25.681 ] 00:06:25.681 }, 00:06:25.681 { 00:06:25.681 "subsystem": "iscsi", 00:06:25.681 "config": [ 00:06:25.681 { 00:06:25.681 "method": "iscsi_set_options", 00:06:25.681 "params": { 00:06:25.681 "node_base": "iqn.2016-06.io.spdk", 00:06:25.681 "max_sessions": 128, 00:06:25.681 "max_connections_per_session": 2, 00:06:25.681 "max_queue_depth": 64, 00:06:25.681 "default_time2wait": 2, 00:06:25.681 "default_time2retain": 20, 00:06:25.681 "first_burst_length": 8192, 00:06:25.681 "immediate_data": true, 00:06:25.681 "allow_duplicated_isid": false, 00:06:25.681 "error_recovery_level": 0, 00:06:25.681 "nop_timeout": 60, 00:06:25.681 "nop_in_interval": 30, 00:06:25.681 "disable_chap": false, 00:06:25.681 "require_chap": false, 00:06:25.681 "mutual_chap": false, 00:06:25.681 "chap_group": 0, 00:06:25.681 "max_large_datain_per_connection": 64, 00:06:25.681 "max_r2t_per_connection": 4, 00:06:25.681 "pdu_pool_size": 36864, 00:06:25.681 "immediate_data_pool_size": 16384, 00:06:25.681 "data_out_pool_size": 2048 00:06:25.681 } 00:06:25.681 } 00:06:25.681 ] 00:06:25.681 } 00:06:25.681 ] 00:06:25.681 } 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1175953 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1175953 ']' 
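The JSON dumped just above is the file written by save_config. For spot checks outside the harness, jq can pull a single subsystem out of it; the relative path stands in for the config.json used by this run:

# show only the nvmf section of the saved configuration
jq '.subsystems[] | select(.subsystem == "nvmf")' test/rpc/config.json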
00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1175953 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1175953 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1175953' 00:06:25.681 killing process with pid 1175953 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1175953 00:06:25.681 15:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1175953 00:06:26.246 15:24:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1176100 00:06:26.246 15:24:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:26.246 15:24:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1176100 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1176100 ']' 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1176100 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1176100 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1176100' 00:06:31.507 killing process with pid 1176100 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1176100 00:06:31.507 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1176100 00:06:31.765 15:24:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:31.765 15:24:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:31.765 00:06:31.765 real 0m6.508s 00:06:31.765 user 0m6.101s 00:06:31.765 sys 0m0.696s 00:06:31.765 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.765 15:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.765 ************************************ 00:06:31.765 END TEST skip_rpc_with_json 00:06:31.765 ************************************ 00:06:31.765 15:24:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay 
test_skip_rpc_with_delay 00:06:31.765 15:24:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.765 15:24:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.765 15:24:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.765 ************************************ 00:06:31.765 START TEST skip_rpc_with_delay 00:06:31.765 ************************************ 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:31.766 [2024-05-15 15:24:44.721770] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
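This error is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, since there would be no RPC server to wait on. A minimal sketch of that negative check, with the binary path assumed:

# the delay test only passes if this invocation fails to start
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started despite conflicting options" >&2
    exit 1
fi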
00:06:31.766 [2024-05-15 15:24:44.721888] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.766 00:06:31.766 real 0m0.063s 00:06:31.766 user 0m0.041s 00:06:31.766 sys 0m0.022s 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.766 15:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:31.766 ************************************ 00:06:31.766 END TEST skip_rpc_with_delay 00:06:31.766 ************************************ 00:06:31.766 15:24:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:31.766 15:24:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:31.766 15:24:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:31.766 15:24:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.766 15:24:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.766 15:24:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.766 ************************************ 00:06:31.766 START TEST exit_on_failed_rpc_init 00:06:31.766 ************************************ 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1176812 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1176812 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1176812 ']' 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.766 15:24:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.766 [2024-05-15 15:24:44.837807] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:31.766 [2024-05-15 15:24:44.837899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176812 ] 00:06:32.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.023 [2024-05-15 15:24:44.876395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:32.023 [2024-05-15 15:24:44.907448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.023 [2024-05-15 15:24:44.989841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:32.281 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.281 [2024-05-15 15:24:45.300730] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:32.281 [2024-05-15 15:24:45.300803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176828 ] 00:06:32.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.281 [2024-05-15 15:24:45.338105] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.281 [2024-05-15 15:24:45.373012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.538 [2024-05-15 15:24:45.466699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.538 [2024-05-15 15:24:45.466831] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
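Here the collision is deliberate: exit_on_failed_rpc_init keeps a first target bound to /var/tmp/spdk.sock and then starts a second one on the same socket, which must fail RPC initialization and exit. A condensed sketch of the pattern, with the binary path assumed:

# first target owns the default RPC socket
./build/bin/spdk_tgt -m 0x1 &
first_pid=$!
sleep 5
# second target reuses the socket, so RPC listen fails and the process exits non-zero
if ./build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target started on a busy socket" >&2
fi
kill "$first_pid"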
00:06:32.538 [2024-05-15 15:24:45.466851] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:32.538 [2024-05-15 15:24:45.466863] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.538 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:32.538 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.538 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:32.538 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.538 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1176812 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1176812 ']' 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1176812 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1176812 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1176812' 00:06:32.539 killing process with pid 1176812 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1176812 00:06:32.539 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1176812 00:06:33.102 00:06:33.102 real 0m1.214s 00:06:33.102 user 0m1.302s 00:06:33.102 sys 0m0.478s 00:06:33.102 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.102 15:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 END TEST exit_on_failed_rpc_init 00:06:33.102 ************************************ 00:06:33.102 15:24:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:33.102 00:06:33.102 real 0m13.509s 00:06:33.102 user 0m12.680s 00:06:33.102 sys 0m1.695s 00:06:33.102 15:24:46 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.102 15:24:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 END TEST skip_rpc 00:06:33.102 ************************************ 00:06:33.102 15:24:46 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:33.102 15:24:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.102 15:24:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.102 15:24:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 START TEST rpc_client 00:06:33.102 ************************************ 00:06:33.102 15:24:46 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:33.102 * Looking for test storage... 00:06:33.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:33.102 15:24:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:33.102 OK 00:06:33.102 15:24:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:33.102 00:06:33.102 real 0m0.068s 00:06:33.102 user 0m0.030s 00:06:33.102 sys 0m0.044s 00:06:33.102 15:24:46 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.102 15:24:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 END TEST rpc_client 00:06:33.102 ************************************ 00:06:33.102 15:24:46 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:33.102 15:24:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.102 15:24:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.102 15:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 START TEST json_config 00:06:33.102 ************************************ 00:06:33.102 15:24:46 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:33.360 15:24:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.360 15:24:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:33.360 15:24:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.361 15:24:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.361 15:24:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.361 15:24:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.361 15:24:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.361 15:24:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.361 15:24:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.361 15:24:46 json_config -- paths/export.sh@5 -- # export PATH 00:06:33.361 15:24:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@47 -- # : 0 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.361 15:24:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:33.361 INFO: JSON configuration test init 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 15:24:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:33.361 15:24:46 json_config -- json_config/common.sh@9 -- # local app=target 00:06:33.361 15:24:46 json_config -- json_config/common.sh@10 -- # shift 00:06:33.361 15:24:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.361 15:24:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.361 15:24:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.361 15:24:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.361 15:24:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.361 15:24:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1177070 00:06:33.361 15:24:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:33.361 15:24:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.361 Waiting for target to run... 
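json_config talks to this target over a dedicated socket rather than the default one, and an early check below is that only the bdev notification types are enabled. A small sketch of that query, using the socket path shown in the log and assuming jq is available:

scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
# the test expects exactly two lines: bdev_register and bdev_unregister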
00:06:33.361 15:24:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1177070 /var/tmp/spdk_tgt.sock 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@827 -- # '[' -z 1177070 ']' 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.361 15:24:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 [2024-05-15 15:24:46.302796] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:33.362 [2024-05-15 15:24:46.302881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177070 ] 00:06:33.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.928 [2024-05-15 15:24:46.800788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.928 [2024-05-15 15:24:46.839937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.928 [2024-05-15 15:24:46.922311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.185 15:24:47 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.185 15:24:47 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:34.185 15:24:47 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.185 00:06:34.185 15:24:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:34.185 15:24:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:34.185 15:24:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:34.185 15:24:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 15:24:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:34.442 15:24:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:34.442 15:24:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.442 15:24:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 15:24:47 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:34.442 15:24:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:34.442 15:24:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:37.723 15:24:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:37.723 15:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.723 15:24:50 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:37.723 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:37.723 15:24:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.723 15:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:37.723 15:24:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:37.723 15:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:37.723 15:24:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:37.723 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:37.980 MallocForNvmf0 00:06:37.980 15:24:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:37.980 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:38.238 MallocForNvmf1 00:06:38.238 15:24:51 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:38.238 15:24:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:38.495 [2024-05-15 15:24:51.452905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.495 15:24:51 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.495 15:24:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.752 15:24:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:38.752 15:24:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:39.009 15:24:51 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:39.009 15:24:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:39.267 15:24:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:39.267 15:24:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:39.524 [2024-05-15 15:24:52.427625] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:39.524 [2024-05-15 15:24:52.428191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:39.524 15:24:52 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:39.524 15:24:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.524 15:24:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.524 15:24:52 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:39.524 15:24:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.524 15:24:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.524 15:24:52 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:39.524 15:24:52 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:39.524 15:24:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:39.781 MallocBdevForConfigChangeCheck 00:06:39.781 15:24:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:39.781 15:24:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.781 15:24:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.781 15:24:52 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:39.781 15:24:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:40.038 15:24:53 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:06:40.038 INFO: shutting down applications... 00:06:40.038 15:24:53 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:40.038 15:24:53 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:40.038 15:24:53 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:40.038 15:24:53 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:41.951 Calling clear_iscsi_subsystem 00:06:41.951 Calling clear_nvmf_subsystem 00:06:41.951 Calling clear_nbd_subsystem 00:06:41.951 Calling clear_ublk_subsystem 00:06:41.951 Calling clear_vhost_blk_subsystem 00:06:41.951 Calling clear_vhost_scsi_subsystem 00:06:41.951 Calling clear_bdev_subsystem 00:06:41.951 15:24:54 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:41.951 15:24:54 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:41.951 15:24:54 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:41.951 15:24:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.951 15:24:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:41.951 15:24:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:42.210 15:24:55 json_config -- json_config/json_config.sh@345 -- # break 00:06:42.210 15:24:55 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:42.210 15:24:55 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:42.210 15:24:55 json_config -- json_config/common.sh@31 -- # local app=target 00:06:42.210 15:24:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.210 15:24:55 json_config -- json_config/common.sh@35 -- # [[ -n 1177070 ]] 00:06:42.210 15:24:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1177070 00:06:42.210 [2024-05-15 15:24:55.069769] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:42.210 15:24:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.210 15:24:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.210 15:24:55 json_config -- json_config/common.sh@41 -- # kill -0 1177070 00:06:42.210 15:24:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.778 15:24:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.778 15:24:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.778 15:24:55 json_config -- json_config/common.sh@41 -- # kill -0 1177070 00:06:42.778 15:24:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.778 15:24:55 json_config -- json_config/common.sh@43 -- # break 00:06:42.778 15:24:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.778 15:24:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.778 SPDK target shutdown done 00:06:42.778 15:24:55 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:42.778 INFO: relaunching applications... 00:06:42.778 15:24:55 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.778 15:24:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:42.778 15:24:55 json_config -- json_config/common.sh@10 -- # shift 00:06:42.778 15:24:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:42.778 15:24:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:42.778 15:24:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:42.778 15:24:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.778 15:24:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.778 15:24:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1178374 00:06:42.778 15:24:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.778 15:24:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:42.778 Waiting for target to run... 00:06:42.778 15:24:55 json_config -- json_config/common.sh@25 -- # waitforlisten 1178374 /var/tmp/spdk_tgt.sock 00:06:42.778 15:24:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 1178374 ']' 00:06:42.778 15:24:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:42.778 15:24:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.778 15:24:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:42.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:42.778 15:24:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.778 15:24:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.778 [2024-05-15 15:24:55.628738] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:42.778 [2024-05-15 15:24:55.628838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178374 ] 00:06:42.778 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.036 [2024-05-15 15:24:56.101926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
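Aside: the spdk_tgt_config.json replayed by this relaunch was produced by save_config from the RPCs traced earlier in this test. Condensed into direct rpc.py calls against the same socket, that configuration amounts to roughly:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck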
00:06:43.294 [2024-05-15 15:24:56.141283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.294 [2024-05-15 15:24:56.223754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.577 [2024-05-15 15:24:59.254280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.577 [2024-05-15 15:24:59.286232] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:46.577 [2024-05-15 15:24:59.286847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:47.141 15:25:00 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.141 15:25:00 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:47.141 15:25:00 json_config -- json_config/common.sh@26 -- # echo '' 00:06:47.141 00:06:47.141 15:25:00 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:47.142 15:25:00 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:47.142 INFO: Checking if target configuration is the same... 00:06:47.142 15:25:00 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.142 15:25:00 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:47.142 15:25:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.142 + '[' 2 -ne 2 ']' 00:06:47.142 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:47.142 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:47.142 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:47.142 +++ basename /dev/fd/62 00:06:47.142 ++ mktemp /tmp/62.XXX 00:06:47.142 + tmp_file_1=/tmp/62.VE7 00:06:47.142 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.142 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:47.142 + tmp_file_2=/tmp/spdk_tgt_config.json.Yk3 00:06:47.142 + ret=0 00:06:47.142 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:47.398 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:47.398 + diff -u /tmp/62.VE7 /tmp/spdk_tgt_config.json.Yk3 00:06:47.398 + echo 'INFO: JSON config files are the same' 00:06:47.398 INFO: JSON config files are the same 00:06:47.398 + rm /tmp/62.VE7 /tmp/spdk_tgt_config.json.Yk3 00:06:47.398 + exit 0 00:06:47.398 15:25:00 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:47.398 15:25:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:47.398 INFO: changing configuration and checking if this can be detected... 
00:06:47.398 15:25:00 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:47.398 15:25:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:47.654 15:25:00 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.654 15:25:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:47.654 15:25:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.654 + '[' 2 -ne 2 ']' 00:06:47.654 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:47.654 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:47.654 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:47.654 +++ basename /dev/fd/62 00:06:47.654 ++ mktemp /tmp/62.XXX 00:06:47.654 + tmp_file_1=/tmp/62.7JK 00:06:47.654 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.654 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:47.654 + tmp_file_2=/tmp/spdk_tgt_config.json.9G5 00:06:47.654 + ret=0 00:06:47.654 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.216 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.216 + diff -u /tmp/62.7JK /tmp/spdk_tgt_config.json.9G5 00:06:48.216 + ret=1 00:06:48.216 + echo '=== Start of file: /tmp/62.7JK ===' 00:06:48.216 + cat /tmp/62.7JK 00:06:48.216 + echo '=== End of file: /tmp/62.7JK ===' 00:06:48.216 + echo '' 00:06:48.216 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9G5 ===' 00:06:48.216 + cat /tmp/spdk_tgt_config.json.9G5 00:06:48.216 + echo '=== End of file: /tmp/spdk_tgt_config.json.9G5 ===' 00:06:48.216 + echo '' 00:06:48.216 + rm /tmp/62.7JK /tmp/spdk_tgt_config.json.9G5 00:06:48.216 + exit 1 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:48.216 INFO: configuration change detected. 
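Note: both verdicts above come from a plain diff. json_diff.sh asks the running target for its live configuration, normalizes key order on both sides with config_filter.py, and compares; a condensed sketch (temporary file names below are placeholders, the harness uses mktemp):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/running.json
    ./test/json_config/config_filter.py -method sort < /tmp/running.json > /tmp/running.sorted
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ondisk.sorted
    # exit 0 -> 'JSON config files are the same'; after bdev_malloc_delete of the canary bdev, exit 1 -> 'configuration change detected'
    diff -u /tmp/ondisk.sorted /tmp/running.sorted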
00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 1178374 ]] 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.216 15:25:01 json_config -- json_config/json_config.sh@323 -- # killprocess 1178374 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@946 -- # '[' -z 1178374 ']' 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@950 -- # kill -0 1178374 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@951 -- # uname 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1178374 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1178374' 00:06:48.216 killing process with pid 1178374 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@965 -- # kill 1178374 00:06:48.216 [2024-05-15 15:25:01.199868] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:48.216 15:25:01 json_config -- common/autotest_common.sh@970 -- # wait 1178374 00:06:50.114 15:25:02 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:50.114 15:25:02 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:50.114 15:25:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.114 15:25:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.114 15:25:02 
json_config -- json_config/json_config.sh@328 -- # return 0 00:06:50.114 15:25:02 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:50.114 INFO: Success 00:06:50.114 00:06:50.114 real 0m16.539s 00:06:50.114 user 0m18.290s 00:06:50.114 sys 0m2.236s 00:06:50.114 15:25:02 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.114 15:25:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.114 ************************************ 00:06:50.114 END TEST json_config 00:06:50.114 ************************************ 00:06:50.114 15:25:02 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:50.114 15:25:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.114 15:25:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.114 15:25:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.114 ************************************ 00:06:50.114 START TEST json_config_extra_key 00:06:50.114 ************************************ 00:06:50.114 15:25:02 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:50.114 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.114 15:25:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.114 15:25:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.114 15:25:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.114 
15:25:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.114 15:25:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.114 15:25:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.115 15:25:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.115 15:25:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:50.115 15:25:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.115 15:25:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:50.115 15:25:02 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:50.115 INFO: launching applications... 00:06:50.115 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1179303 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:50.115 Waiting for target to run... 00:06:50.115 15:25:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1179303 /var/tmp/spdk_tgt.sock 00:06:50.115 15:25:02 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1179303 ']' 00:06:50.115 15:25:02 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:50.115 15:25:02 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.115 15:25:02 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:50.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:50.115 15:25:02 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.115 15:25:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:50.115 [2024-05-15 15:25:02.885852] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
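Note: unlike the json_config suite above, json_config_extra_key never configures the target over RPC; it boots spdk_tgt straight from a pre-written JSON file and then verifies that a clean SIGINT shutdown works. Reduced to stand-alone commands (the shutdown poll mirrors what json_config_test_shutdown_app is traced doing below):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./test/json_config/extra_key.json &
    pid=$!
    # wait for the RPC socket as in the earlier sketch, then request shutdown
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # target gone: shutdown completed
        sleep 0.5
    done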
00:06:50.115 [2024-05-15 15:25:02.885947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179303 ] 00:06:50.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.115 [2024-05-15 15:25:03.202055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.372 [2024-05-15 15:25:03.241286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.372 [2024-05-15 15:25:03.300184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.936 15:25:03 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.936 15:25:03 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:50.936 00:06:50.936 15:25:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:50.936 INFO: shutting down applications... 00:06:50.936 15:25:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1179303 ]] 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1179303 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1179303 00:06:50.936 15:25:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1179303 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:51.503 15:25:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:51.503 SPDK target shutdown done 00:06:51.503 15:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:51.503 Success 00:06:51.503 00:06:51.503 real 0m1.533s 00:06:51.503 user 0m1.497s 00:06:51.503 sys 0m0.435s 00:06:51.503 15:25:04 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.503 15:25:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:51.503 ************************************ 00:06:51.503 END TEST json_config_extra_key 00:06:51.503 ************************************ 00:06:51.503 15:25:04 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:51.503 15:25:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:06:51.503 15:25:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.503 15:25:04 -- common/autotest_common.sh@10 -- # set +x 00:06:51.503 ************************************ 00:06:51.503 START TEST alias_rpc 00:06:51.503 ************************************ 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:51.503 * Looking for test storage... 00:06:51.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:51.503 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.503 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1179490 00:06:51.503 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:51.503 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1179490 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1179490 ']' 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.503 15:25:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.503 [2024-05-15 15:25:04.484960] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:51.503 [2024-05-15 15:25:04.485043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179490 ] 00:06:51.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.503 [2024-05-15 15:25:04.523094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:51.503 [2024-05-15 15:25:04.561009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.761 [2024-05-15 15:25:04.650922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.018 15:25:04 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.018 15:25:04 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:52.018 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:52.275 15:25:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1179490 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1179490 ']' 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1179490 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1179490 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1179490' 00:06:52.275 killing process with pid 1179490 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@965 -- # kill 1179490 00:06:52.275 15:25:05 alias_rpc -- common/autotest_common.sh@970 -- # wait 1179490 00:06:52.532 00:06:52.533 real 0m1.240s 00:06:52.533 user 0m1.275s 00:06:52.533 sys 0m0.455s 00:06:52.533 15:25:05 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.533 15:25:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.533 ************************************ 00:06:52.533 END TEST alias_rpc 00:06:52.533 ************************************ 00:06:52.790 15:25:05 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:52.790 15:25:05 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:52.790 15:25:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.790 15:25:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.790 15:25:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.790 ************************************ 00:06:52.790 START TEST spdkcli_tcp 00:06:52.790 ************************************ 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:52.790 * Looking for test storage... 
00:06:52.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1179793 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:52.790 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1179793 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1179793 ']' 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:52.790 15:25:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.790 [2024-05-15 15:25:05.781851] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:52.791 [2024-05-15 15:25:05.781947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179793 ] 00:06:52.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.791 [2024-05-15 15:25:05.817470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
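Note: this suite exercises the RPC server over TCP instead of the UNIX socket. The step traced next bridges TCP port 9998 to the target's /var/tmp/spdk.sock with socat, then drives rpc.py through that bridge with a retry budget and timeout; stand-alone, that is roughly:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The rpc_get_methods reply that follows is the full method table of this build, which is why it lists every optional subsystem (vhost, iSCSI, FTL, vfio-user, and so on) even though this run only uses the NVMe-oF/TCP pieces.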
00:06:52.791 [2024-05-15 15:25:05.848545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.048 [2024-05-15 15:25:05.931336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.048 [2024-05-15 15:25:05.931340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.306 15:25:06 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.306 15:25:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:53.306 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1179807 00:06:53.306 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:53.306 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:53.564 [ 00:06:53.564 "bdev_malloc_delete", 00:06:53.564 "bdev_malloc_create", 00:06:53.564 "bdev_null_resize", 00:06:53.564 "bdev_null_delete", 00:06:53.564 "bdev_null_create", 00:06:53.564 "bdev_nvme_cuse_unregister", 00:06:53.564 "bdev_nvme_cuse_register", 00:06:53.564 "bdev_opal_new_user", 00:06:53.564 "bdev_opal_set_lock_state", 00:06:53.564 "bdev_opal_delete", 00:06:53.564 "bdev_opal_get_info", 00:06:53.564 "bdev_opal_create", 00:06:53.564 "bdev_nvme_opal_revert", 00:06:53.564 "bdev_nvme_opal_init", 00:06:53.564 "bdev_nvme_send_cmd", 00:06:53.564 "bdev_nvme_get_path_iostat", 00:06:53.564 "bdev_nvme_get_mdns_discovery_info", 00:06:53.564 "bdev_nvme_stop_mdns_discovery", 00:06:53.564 "bdev_nvme_start_mdns_discovery", 00:06:53.564 "bdev_nvme_set_multipath_policy", 00:06:53.564 "bdev_nvme_set_preferred_path", 00:06:53.564 "bdev_nvme_get_io_paths", 00:06:53.564 "bdev_nvme_remove_error_injection", 00:06:53.564 "bdev_nvme_add_error_injection", 00:06:53.564 "bdev_nvme_get_discovery_info", 00:06:53.564 "bdev_nvme_stop_discovery", 00:06:53.564 "bdev_nvme_start_discovery", 00:06:53.564 "bdev_nvme_get_controller_health_info", 00:06:53.564 "bdev_nvme_disable_controller", 00:06:53.564 "bdev_nvme_enable_controller", 00:06:53.564 "bdev_nvme_reset_controller", 00:06:53.564 "bdev_nvme_get_transport_statistics", 00:06:53.564 "bdev_nvme_apply_firmware", 00:06:53.564 "bdev_nvme_detach_controller", 00:06:53.564 "bdev_nvme_get_controllers", 00:06:53.564 "bdev_nvme_attach_controller", 00:06:53.564 "bdev_nvme_set_hotplug", 00:06:53.564 "bdev_nvme_set_options", 00:06:53.564 "bdev_passthru_delete", 00:06:53.564 "bdev_passthru_create", 00:06:53.564 "bdev_lvol_check_shallow_copy", 00:06:53.564 "bdev_lvol_start_shallow_copy", 00:06:53.564 "bdev_lvol_grow_lvstore", 00:06:53.564 "bdev_lvol_get_lvols", 00:06:53.564 "bdev_lvol_get_lvstores", 00:06:53.564 "bdev_lvol_delete", 00:06:53.564 "bdev_lvol_set_read_only", 00:06:53.564 "bdev_lvol_resize", 00:06:53.564 "bdev_lvol_decouple_parent", 00:06:53.564 "bdev_lvol_inflate", 00:06:53.564 "bdev_lvol_rename", 00:06:53.564 "bdev_lvol_clone_bdev", 00:06:53.564 "bdev_lvol_clone", 00:06:53.564 "bdev_lvol_snapshot", 00:06:53.564 "bdev_lvol_create", 00:06:53.564 "bdev_lvol_delete_lvstore", 00:06:53.564 "bdev_lvol_rename_lvstore", 00:06:53.564 "bdev_lvol_create_lvstore", 00:06:53.564 "bdev_raid_set_options", 00:06:53.564 "bdev_raid_remove_base_bdev", 00:06:53.564 "bdev_raid_add_base_bdev", 00:06:53.564 "bdev_raid_delete", 00:06:53.564 "bdev_raid_create", 00:06:53.564 "bdev_raid_get_bdevs", 00:06:53.564 "bdev_error_inject_error", 00:06:53.564 "bdev_error_delete", 00:06:53.564 "bdev_error_create", 00:06:53.564 "bdev_split_delete", 00:06:53.564 
"bdev_split_create", 00:06:53.564 "bdev_delay_delete", 00:06:53.564 "bdev_delay_create", 00:06:53.564 "bdev_delay_update_latency", 00:06:53.564 "bdev_zone_block_delete", 00:06:53.564 "bdev_zone_block_create", 00:06:53.564 "blobfs_create", 00:06:53.564 "blobfs_detect", 00:06:53.564 "blobfs_set_cache_size", 00:06:53.564 "bdev_aio_delete", 00:06:53.564 "bdev_aio_rescan", 00:06:53.564 "bdev_aio_create", 00:06:53.564 "bdev_ftl_set_property", 00:06:53.564 "bdev_ftl_get_properties", 00:06:53.564 "bdev_ftl_get_stats", 00:06:53.564 "bdev_ftl_unmap", 00:06:53.564 "bdev_ftl_unload", 00:06:53.564 "bdev_ftl_delete", 00:06:53.564 "bdev_ftl_load", 00:06:53.564 "bdev_ftl_create", 00:06:53.564 "bdev_virtio_attach_controller", 00:06:53.564 "bdev_virtio_scsi_get_devices", 00:06:53.564 "bdev_virtio_detach_controller", 00:06:53.564 "bdev_virtio_blk_set_hotplug", 00:06:53.564 "bdev_iscsi_delete", 00:06:53.564 "bdev_iscsi_create", 00:06:53.564 "bdev_iscsi_set_options", 00:06:53.564 "accel_error_inject_error", 00:06:53.564 "ioat_scan_accel_module", 00:06:53.564 "dsa_scan_accel_module", 00:06:53.564 "iaa_scan_accel_module", 00:06:53.564 "vfu_virtio_create_scsi_endpoint", 00:06:53.564 "vfu_virtio_scsi_remove_target", 00:06:53.564 "vfu_virtio_scsi_add_target", 00:06:53.564 "vfu_virtio_create_blk_endpoint", 00:06:53.564 "vfu_virtio_delete_endpoint", 00:06:53.564 "keyring_file_remove_key", 00:06:53.564 "keyring_file_add_key", 00:06:53.564 "iscsi_get_histogram", 00:06:53.564 "iscsi_enable_histogram", 00:06:53.564 "iscsi_set_options", 00:06:53.564 "iscsi_get_auth_groups", 00:06:53.564 "iscsi_auth_group_remove_secret", 00:06:53.564 "iscsi_auth_group_add_secret", 00:06:53.564 "iscsi_delete_auth_group", 00:06:53.564 "iscsi_create_auth_group", 00:06:53.564 "iscsi_set_discovery_auth", 00:06:53.564 "iscsi_get_options", 00:06:53.564 "iscsi_target_node_request_logout", 00:06:53.564 "iscsi_target_node_set_redirect", 00:06:53.564 "iscsi_target_node_set_auth", 00:06:53.564 "iscsi_target_node_add_lun", 00:06:53.564 "iscsi_get_stats", 00:06:53.564 "iscsi_get_connections", 00:06:53.564 "iscsi_portal_group_set_auth", 00:06:53.564 "iscsi_start_portal_group", 00:06:53.564 "iscsi_delete_portal_group", 00:06:53.564 "iscsi_create_portal_group", 00:06:53.564 "iscsi_get_portal_groups", 00:06:53.564 "iscsi_delete_target_node", 00:06:53.564 "iscsi_target_node_remove_pg_ig_maps", 00:06:53.564 "iscsi_target_node_add_pg_ig_maps", 00:06:53.564 "iscsi_create_target_node", 00:06:53.564 "iscsi_get_target_nodes", 00:06:53.564 "iscsi_delete_initiator_group", 00:06:53.564 "iscsi_initiator_group_remove_initiators", 00:06:53.564 "iscsi_initiator_group_add_initiators", 00:06:53.564 "iscsi_create_initiator_group", 00:06:53.564 "iscsi_get_initiator_groups", 00:06:53.564 "nvmf_set_crdt", 00:06:53.565 "nvmf_set_config", 00:06:53.565 "nvmf_set_max_subsystems", 00:06:53.565 "nvmf_stop_mdns_prr", 00:06:53.565 "nvmf_publish_mdns_prr", 00:06:53.565 "nvmf_subsystem_get_listeners", 00:06:53.565 "nvmf_subsystem_get_qpairs", 00:06:53.565 "nvmf_subsystem_get_controllers", 00:06:53.565 "nvmf_get_stats", 00:06:53.565 "nvmf_get_transports", 00:06:53.565 "nvmf_create_transport", 00:06:53.565 "nvmf_get_targets", 00:06:53.565 "nvmf_delete_target", 00:06:53.565 "nvmf_create_target", 00:06:53.565 "nvmf_subsystem_allow_any_host", 00:06:53.565 "nvmf_subsystem_remove_host", 00:06:53.565 "nvmf_subsystem_add_host", 00:06:53.565 "nvmf_ns_remove_host", 00:06:53.565 "nvmf_ns_add_host", 00:06:53.565 "nvmf_subsystem_remove_ns", 00:06:53.565 "nvmf_subsystem_add_ns", 00:06:53.565 
"nvmf_subsystem_listener_set_ana_state", 00:06:53.565 "nvmf_discovery_get_referrals", 00:06:53.565 "nvmf_discovery_remove_referral", 00:06:53.565 "nvmf_discovery_add_referral", 00:06:53.565 "nvmf_subsystem_remove_listener", 00:06:53.565 "nvmf_subsystem_add_listener", 00:06:53.565 "nvmf_delete_subsystem", 00:06:53.565 "nvmf_create_subsystem", 00:06:53.565 "nvmf_get_subsystems", 00:06:53.565 "env_dpdk_get_mem_stats", 00:06:53.565 "nbd_get_disks", 00:06:53.565 "nbd_stop_disk", 00:06:53.565 "nbd_start_disk", 00:06:53.565 "ublk_recover_disk", 00:06:53.565 "ublk_get_disks", 00:06:53.565 "ublk_stop_disk", 00:06:53.565 "ublk_start_disk", 00:06:53.565 "ublk_destroy_target", 00:06:53.565 "ublk_create_target", 00:06:53.565 "virtio_blk_create_transport", 00:06:53.565 "virtio_blk_get_transports", 00:06:53.565 "vhost_controller_set_coalescing", 00:06:53.565 "vhost_get_controllers", 00:06:53.565 "vhost_delete_controller", 00:06:53.565 "vhost_create_blk_controller", 00:06:53.565 "vhost_scsi_controller_remove_target", 00:06:53.565 "vhost_scsi_controller_add_target", 00:06:53.565 "vhost_start_scsi_controller", 00:06:53.565 "vhost_create_scsi_controller", 00:06:53.565 "thread_set_cpumask", 00:06:53.565 "framework_get_scheduler", 00:06:53.565 "framework_set_scheduler", 00:06:53.565 "framework_get_reactors", 00:06:53.565 "thread_get_io_channels", 00:06:53.565 "thread_get_pollers", 00:06:53.565 "thread_get_stats", 00:06:53.565 "framework_monitor_context_switch", 00:06:53.565 "spdk_kill_instance", 00:06:53.565 "log_enable_timestamps", 00:06:53.565 "log_get_flags", 00:06:53.565 "log_clear_flag", 00:06:53.565 "log_set_flag", 00:06:53.565 "log_get_level", 00:06:53.565 "log_set_level", 00:06:53.565 "log_get_print_level", 00:06:53.565 "log_set_print_level", 00:06:53.565 "framework_enable_cpumask_locks", 00:06:53.565 "framework_disable_cpumask_locks", 00:06:53.565 "framework_wait_init", 00:06:53.565 "framework_start_init", 00:06:53.565 "scsi_get_devices", 00:06:53.565 "bdev_get_histogram", 00:06:53.565 "bdev_enable_histogram", 00:06:53.565 "bdev_set_qos_limit", 00:06:53.565 "bdev_set_qd_sampling_period", 00:06:53.565 "bdev_get_bdevs", 00:06:53.565 "bdev_reset_iostat", 00:06:53.565 "bdev_get_iostat", 00:06:53.565 "bdev_examine", 00:06:53.565 "bdev_wait_for_examine", 00:06:53.565 "bdev_set_options", 00:06:53.565 "notify_get_notifications", 00:06:53.565 "notify_get_types", 00:06:53.565 "accel_get_stats", 00:06:53.565 "accel_set_options", 00:06:53.565 "accel_set_driver", 00:06:53.565 "accel_crypto_key_destroy", 00:06:53.565 "accel_crypto_keys_get", 00:06:53.565 "accel_crypto_key_create", 00:06:53.565 "accel_assign_opc", 00:06:53.565 "accel_get_module_info", 00:06:53.565 "accel_get_opc_assignments", 00:06:53.565 "vmd_rescan", 00:06:53.565 "vmd_remove_device", 00:06:53.565 "vmd_enable", 00:06:53.565 "sock_get_default_impl", 00:06:53.565 "sock_set_default_impl", 00:06:53.565 "sock_impl_set_options", 00:06:53.565 "sock_impl_get_options", 00:06:53.565 "iobuf_get_stats", 00:06:53.565 "iobuf_set_options", 00:06:53.565 "keyring_get_keys", 00:06:53.565 "framework_get_pci_devices", 00:06:53.565 "framework_get_config", 00:06:53.565 "framework_get_subsystems", 00:06:53.565 "vfu_tgt_set_base_path", 00:06:53.565 "trace_get_info", 00:06:53.565 "trace_get_tpoint_group_mask", 00:06:53.565 "trace_disable_tpoint_group", 00:06:53.565 "trace_enable_tpoint_group", 00:06:53.565 "trace_clear_tpoint_mask", 00:06:53.565 "trace_set_tpoint_mask", 00:06:53.565 "spdk_get_version", 00:06:53.565 "rpc_get_methods" 00:06:53.565 ] 00:06:53.565 15:25:06 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.565 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:53.565 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1179793 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1179793 ']' 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 1179793 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1179793 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1179793' 00:06:53.565 killing process with pid 1179793 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1179793 00:06:53.565 15:25:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1179793 00:06:53.824 00:06:53.824 real 0m1.217s 00:06:53.824 user 0m2.146s 00:06:53.824 sys 0m0.439s 00:06:53.824 15:25:06 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.824 15:25:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.824 ************************************ 00:06:53.824 END TEST spdkcli_tcp 00:06:53.824 ************************************ 00:06:53.824 15:25:06 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:53.824 15:25:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:53.824 15:25:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.824 15:25:06 -- common/autotest_common.sh@10 -- # set +x 00:06:54.082 ************************************ 00:06:54.082 START TEST dpdk_mem_utility 00:06:54.082 ************************************ 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.082 * Looking for test storage... 
00:06:54.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:54.082 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:54.082 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1179993 00:06:54.082 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.082 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1179993 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1179993 ']' 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.082 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.082 [2024-05-15 15:25:07.046453] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:54.082 [2024-05-15 15:25:07.046549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179993 ] 00:06:54.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.082 [2024-05-15 15:25:07.082610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
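The dpdk_mem_utility check that is starting here reduces to three steps which can also be run by hand against a live target; a minimal sketch, assuming the default RPC socket at /var/tmp/spdk.sock and the workspace layout used in this run:

# 1. start the target (the same binary the test launches above)
./build/bin/spdk_tgt &
# 2. have the target dump its DPDK memory state; the RPC replies with the dump file name, /tmp/spdk_mem_dump.txt
./scripts/rpc.py env_dpdk_get_mem_stats
# 3. summarize the dump; the bare form prints heap/mempool/memzone totals, and -m 0 (as used below) prints the per-element detail for heap 0
./scripts/dpdk_mem_info.py
./scripts/dpdk_mem_info.py -m 0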
00:06:54.082 [2024-05-15 15:25:07.116297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.339 [2024-05-15 15:25:07.201311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.598 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.598 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:54.598 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:54.598 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:54.598 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.598 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.598 { 00:06:54.598 "filename": "/tmp/spdk_mem_dump.txt" 00:06:54.598 } 00:06:54.598 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.598 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:54.598 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:54.598 1 heaps totaling size 814.000000 MiB 00:06:54.598 size: 814.000000 MiB heap id: 0 00:06:54.598 end heaps---------- 00:06:54.598 8 mempools totaling size 598.116089 MiB 00:06:54.598 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:54.598 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:54.598 size: 84.521057 MiB name: bdev_io_1179993 00:06:54.598 size: 51.011292 MiB name: evtpool_1179993 00:06:54.598 size: 50.003479 MiB name: msgpool_1179993 00:06:54.598 size: 21.763794 MiB name: PDU_Pool 00:06:54.598 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:54.598 size: 0.026123 MiB name: Session_Pool 00:06:54.598 end mempools------- 00:06:54.598 6 memzones totaling size 4.142822 MiB 00:06:54.598 size: 1.000366 MiB name: RG_ring_0_1179993 00:06:54.598 size: 1.000366 MiB name: RG_ring_1_1179993 00:06:54.598 size: 1.000366 MiB name: RG_ring_4_1179993 00:06:54.598 size: 1.000366 MiB name: RG_ring_5_1179993 00:06:54.598 size: 0.125366 MiB name: RG_ring_2_1179993 00:06:54.598 size: 0.015991 MiB name: RG_ring_3_1179993 00:06:54.598 end memzones------- 00:06:54.598 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:54.598 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:54.598 list of free elements. 
size: 12.519348 MiB 00:06:54.598 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:54.598 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:54.598 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:54.598 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:54.598 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:54.598 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:54.598 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:54.598 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:54.598 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:54.598 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:54.598 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:54.598 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:54.598 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:54.598 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:54.598 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:54.598 list of standard malloc elements. size: 199.218079 MiB 00:06:54.598 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:54.598 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:54.598 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:54.598 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:54.598 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:54.598 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:54.598 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:54.598 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:54.598 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:54.598 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:54.598 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:54.598 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:54.598 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:54.598 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:54.598 list of memzone associated elements. size: 602.262573 MiB 00:06:54.598 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:54.598 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:54.599 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:54.599 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:54.599 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:54.599 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1179993_0 00:06:54.599 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:54.599 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1179993_0 00:06:54.599 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:54.599 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1179993_0 00:06:54.599 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:54.599 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:54.599 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:54.599 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:54.599 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:54.599 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1179993 00:06:54.599 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:54.599 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1179993 00:06:54.599 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:54.599 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1179993 00:06:54.599 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:54.599 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:54.599 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:54.599 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:54.599 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:54.599 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:54.599 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:54.599 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:54.599 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:54.599 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1179993 00:06:54.599 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:54.599 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1179993 00:06:54.599 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:54.599 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1179993 00:06:54.599 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:54.599 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1179993 00:06:54.599 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:54.599 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1179993 00:06:54.599 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:54.599 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:54.599 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:54.599 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:54.599 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:54.599 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:54.599 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:54.599 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1179993 00:06:54.599 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:54.599 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:54.599 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:54.599 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:54.599 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:54.599 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1179993 00:06:54.599 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:54.599 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:54.599 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:54.599 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1179993 00:06:54.599 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:54.599 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1179993 00:06:54.599 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:54.599 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:54.599 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:54.599 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1179993 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1179993 ']' 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1179993 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1179993 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1179993' 00:06:54.599 killing process with pid 1179993 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1179993 00:06:54.599 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1179993 00:06:55.164 00:06:55.164 real 0m1.057s 00:06:55.164 user 0m1.016s 00:06:55.164 sys 0m0.406s 00:06:55.164 15:25:08 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.164 15:25:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:55.164 ************************************ 00:06:55.164 END TEST dpdk_mem_utility 00:06:55.164 ************************************ 00:06:55.164 15:25:08 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:55.164 15:25:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.164 15:25:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.164 15:25:08 -- common/autotest_common.sh@10 -- # set +x 00:06:55.164 ************************************ 00:06:55.164 START TEST event 00:06:55.164 ************************************ 00:06:55.164 15:25:08 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:55.164 * Looking for test storage... 00:06:55.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:55.164 15:25:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:55.164 15:25:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.164 15:25:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:55.164 15:25:08 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:55.164 15:25:08 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.164 15:25:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.164 ************************************ 00:06:55.164 START TEST event_perf 00:06:55.164 ************************************ 00:06:55.164 15:25:08 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:55.164 Running I/O for 1 seconds...[2024-05-15 15:25:08.154336] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:55.164 [2024-05-15 15:25:08.154400] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180181 ] 00:06:55.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.164 [2024-05-15 15:25:08.191687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.164 [2024-05-15 15:25:08.224518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.422 [2024-05-15 15:25:08.311210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.422 [2024-05-15 15:25:08.311328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.422 [2024-05-15 15:25:08.311354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.422 [2024-05-15 15:25:08.311357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.354 Running I/O for 1 seconds... 00:06:56.354 lcore 0: 234706 00:06:56.354 lcore 1: 234704 00:06:56.354 lcore 2: 234704 00:06:56.354 lcore 3: 234704 00:06:56.354 done. 
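The per-lcore numbers above are the event counts each reactor processed during the 1-second run (-t 1 with core mask 0xF); summed they give the aggregate rate the test is measuring: 234706 + 234704 + 234704 + 234704 = 938818 events in one second, roughly 235k events per reactor.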
00:06:56.354 00:06:56.354 real 0m1.251s 00:06:56.354 user 0m4.152s 00:06:56.354 sys 0m0.094s 00:06:56.354 15:25:09 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.354 15:25:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.354 ************************************ 00:06:56.354 END TEST event_perf 00:06:56.354 ************************************ 00:06:56.354 15:25:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:56.354 15:25:09 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:56.354 15:25:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.354 15:25:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.354 ************************************ 00:06:56.354 START TEST event_reactor 00:06:56.354 ************************************ 00:06:56.354 15:25:09 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:56.354 [2024-05-15 15:25:09.454866] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:56.354 [2024-05-15 15:25:09.454932] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180344 ] 00:06:56.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.612 [2024-05-15 15:25:09.497387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.612 [2024-05-15 15:25:09.533688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.612 [2024-05-15 15:25:09.623581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.021 test_start 00:06:58.021 oneshot 00:06:58.021 tick 100 00:06:58.021 tick 100 00:06:58.021 tick 250 00:06:58.021 tick 100 00:06:58.021 tick 100 00:06:58.021 tick 100 00:06:58.021 tick 250 00:06:58.021 tick 500 00:06:58.021 tick 100 00:06:58.021 tick 100 00:06:58.021 tick 250 00:06:58.021 tick 100 00:06:58.021 tick 100 00:06:58.021 test_end 00:06:58.021 00:06:58.021 real 0m1.260s 00:06:58.021 user 0m1.152s 00:06:58.021 sys 0m0.103s 00:06:58.021 15:25:10 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.021 15:25:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:58.021 ************************************ 00:06:58.021 END TEST event_reactor 00:06:58.021 ************************************ 00:06:58.021 15:25:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:58.021 15:25:10 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:58.021 15:25:10 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.021 15:25:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.021 ************************************ 00:06:58.021 START TEST event_reactor_perf 00:06:58.021 ************************************ 00:06:58.021 15:25:10 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:58.021 [2024-05-15 15:25:10.771591] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 
24.07.0-rc0 initialization... 00:06:58.021 [2024-05-15 15:25:10.771654] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180508 ] 00:06:58.021 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.021 [2024-05-15 15:25:10.808016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.021 [2024-05-15 15:25:10.845155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.021 [2024-05-15 15:25:10.933032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.955 test_start 00:06:58.955 test_end 00:06:58.955 Performance: 349938 events per second 00:06:58.955 00:06:58.955 real 0m1.259s 00:06:58.955 user 0m1.162s 00:06:58.955 sys 0m0.093s 00:06:58.955 15:25:12 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.955 15:25:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.955 ************************************ 00:06:58.955 END TEST event_reactor_perf 00:06:58.955 ************************************ 00:06:58.956 15:25:12 event -- event/event.sh@49 -- # uname -s 00:06:58.956 15:25:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:58.956 15:25:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:58.956 15:25:12 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.956 15:25:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.956 15:25:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.214 ************************************ 00:06:59.214 START TEST event_scheduler 00:06:59.214 ************************************ 00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:59.214 * Looking for test storage... 00:06:59.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:59.214 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:59.214 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1180688 00:06:59.214 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:59.214 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.214 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1180688 00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1180688 ']' 00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.214 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.214 [2024-05-15 15:25:12.169459] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:06:59.214 [2024-05-15 15:25:12.169547] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180688 ] 00:06:59.214 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.214 [2024-05-15 15:25:12.205403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.214 [2024-05-15 15:25:12.236026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.472 [2024-05-15 15:25:12.321479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.472 [2024-05-15 15:25:12.321538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.472 [2024-05-15 15:25:12.321603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.472 [2024-05-15 15:25:12.321606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:59.472 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.472 POWER: Env isn't set yet! 00:06:59.472 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:59.472 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:59.472 POWER: Cannot get available frequencies of lcore 0 00:06:59.472 POWER: Attempting to initialise PSTAT power management... 00:06:59.472 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:59.472 POWER: Initialized successfully for lcore 0 power management 00:06:59.472 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:59.472 POWER: Initialized successfully for lcore 1 power management 00:06:59.472 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:59.472 POWER: Initialized successfully for lcore 2 power management 00:06:59.472 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:59.472 POWER: Initialized successfully for lcore 3 power management 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.472 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.472 [2024-05-15 15:25:12.533919] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
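Each numbered step in the scheduler_create_thread test below is a single RPC issued through the scheduler test plugin; the same calls can be made by hand with rpc.py. A sketch, assuming the scheduler app from this run is still listening on the default socket and that the scheduler_plugin module shipped with the test under test/event/scheduler is importable:

# create a thread pinned to core 0; -a sets how busy the thread reports itself (100 here, 0 for the idle_pinned threads)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# change the reported load of thread id 11 to 50%
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
# delete thread id 12
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12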
00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.472 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.472 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.472 ************************************ 00:06:59.472 START TEST scheduler_create_thread 00:06:59.472 ************************************ 00:06:59.472 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:59.472 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:59.472 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.472 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 2 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 3 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 4 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 5 00:06:59.730 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.731 6 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.731 7 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.731 8 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.731 9 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.731 10 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.731 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.296 15:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.296 15:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:00.296 15:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.296 15:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.667 15:25:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.667 15:25:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:01.667 15:25:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:01.667 15:25:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.667 15:25:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.601 15:25:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.601 00:07:02.601 real 0m3.050s 00:07:02.601 user 0m0.014s 00:07:02.601 sys 0m0.002s 00:07:02.601 15:25:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.601 15:25:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.601 ************************************ 00:07:02.601 END TEST scheduler_create_thread 00:07:02.601 ************************************ 00:07:02.601 15:25:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:02.601 15:25:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1180688 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1180688 ']' 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1180688 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1180688 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1180688' 00:07:02.601 killing process with pid 1180688 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1180688 00:07:02.601 15:25:15 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1180688 00:07:03.168 [2024-05-15 15:25:15.995929] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
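On shutdown the scheduler app's power management layer restores each core's cpufreq governor, which is what the POWER lines below report (governors set back to 'userspace'/'schedutil' after the 'performance' mode used during the test). That state lives in standard cpufreq sysfs and can be inspected directly; a sketch assuming the usual sysfs layout, not anything the test itself runs:

# governor currently applied to core 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# governors the platform driver will accept
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors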
00:07:03.168 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:07:03.168 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:03.168 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:07:03.168 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:03.168 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:07:03.168 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:03.168 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:07:03.168 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:03.168 00:07:03.168 real 0m4.165s 00:07:03.168 user 0m6.845s 00:07:03.168 sys 0m0.344s 00:07:03.168 15:25:16 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.168 15:25:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.168 ************************************ 00:07:03.168 END TEST event_scheduler 00:07:03.168 ************************************ 00:07:03.168 15:25:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:03.427 15:25:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:03.427 15:25:16 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.427 15:25:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.427 15:25:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.427 ************************************ 00:07:03.427 START TEST app_repeat 00:07:03.427 ************************************ 00:07:03.427 15:25:16 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1181261 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1181261' 00:07:03.427 Process app_repeat pid: 1181261 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:03.427 spdk_app_start Round 0 00:07:03.427 15:25:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1181261 /var/tmp/spdk-nbd.sock 00:07:03.427 15:25:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1181261 ']' 00:07:03.427 15:25:16 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.427 15:25:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.427 15:25:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.427 15:25:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.427 15:25:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.427 [2024-05-15 15:25:16.324328] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:03.427 [2024-05-15 15:25:16.324390] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181261 ] 00:07:03.427 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.427 [2024-05-15 15:25:16.361179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.427 [2024-05-15 15:25:16.397488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.427 [2024-05-15 15:25:16.485026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.427 [2024-05-15 15:25:16.485030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.685 15:25:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.685 15:25:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:03.685 15:25:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.943 Malloc0 00:07:03.943 15:25:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.200 Malloc1 00:07:04.200 15:25:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.200 15:25:17 event.app_repeat 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.200 15:25:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.457 /dev/nbd0 00:07:04.457 15:25:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.457 15:25:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.457 1+0 records in 00:07:04.457 1+0 records out 00:07:04.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177693 s, 23.1 MB/s 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:04.457 15:25:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:04.457 15:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.457 15:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.457 15:25:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.715 /dev/nbd1 00:07:04.715 15:25:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.715 15:25:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.715 15:25:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:04.715 15:25:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:04.715 15:25:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:04.715 15:25:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.716 1+0 records in 00:07:04.716 1+0 records out 00:07:04.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157617 s, 26.0 MB/s 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:04.716 15:25:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:04.716 15:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.716 15:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.716 15:25:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.716 15:25:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.716 15:25:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.974 { 00:07:04.974 "nbd_device": "/dev/nbd0", 00:07:04.974 "bdev_name": "Malloc0" 00:07:04.974 }, 00:07:04.974 { 00:07:04.974 "nbd_device": "/dev/nbd1", 00:07:04.974 "bdev_name": "Malloc1" 00:07:04.974 } 00:07:04.974 ]' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.974 { 00:07:04.974 "nbd_device": "/dev/nbd0", 00:07:04.974 "bdev_name": "Malloc0" 00:07:04.974 }, 00:07:04.974 { 00:07:04.974 "nbd_device": "/dev/nbd1", 00:07:04.974 "bdev_name": "Malloc1" 00:07:04.974 } 00:07:04.974 ]' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.974 /dev/nbd1' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.974 /dev/nbd1' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.974 256+0 
records in 00:07:04.974 256+0 records out 00:07:04.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520158 s, 202 MB/s 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.974 256+0 records in 00:07:04.974 256+0 records out 00:07:04.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233169 s, 45.0 MB/s 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.974 15:25:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.974 256+0 records in 00:07:04.974 256+0 records out 00:07:04.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215785 s, 48.6 MB/s 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.974 15:25:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.233 15:25:18 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.233 15:25:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.489 15:25:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.746 15:25:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.747 15:25:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.004 15:25:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.309 [2024-05-15 15:25:19.310006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.309 [2024-05-15 15:25:19.398964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.309 [2024-05-15 15:25:19.398964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.567 [2024-05-15 15:25:19.457927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.567 [2024-05-15 15:25:19.457997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
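[editorial aside, not part of the captured log] The nbd_stop_disk traces above poll /proc/partitions until the kernel drops the nbd device before the test moves on. A minimal stand-alone sketch of that wait loop, with the function name and retry count taken from the trace rather than the exact SPDK helper, could look like:

    #!/usr/bin/env bash
    # Poll until an nbd device name disappears from /proc/partitions after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # -w matches the bare device name (e.g. "nbd0") as a whole word
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0    # device is gone, stop polling
            fi
            sleep 0.1
        done
        return 1            # still present after 20 attempts
    }

    # hypothetical usage: waitfornbd_exit nbd0 || echo "nbd0 did not detach"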
00:07:09.092 15:25:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:09.092 15:25:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:09.092 spdk_app_start Round 1 00:07:09.092 15:25:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1181261 /var/tmp/spdk-nbd.sock 00:07:09.092 15:25:22 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1181261 ']' 00:07:09.092 15:25:22 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.092 15:25:22 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.092 15:25:22 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.092 15:25:22 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.092 15:25:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.349 15:25:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.349 15:25:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:09.349 15:25:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.607 Malloc0 00:07:09.607 15:25:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.865 Malloc1 00:07:09.865 15:25:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.865 15:25:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.865 15:25:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.865 15:25:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.865 15:25:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.865 15:25:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.866 15:25:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:10.124 /dev/nbd0 00:07:10.124 15:25:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.124 15:25:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.124 1+0 records in 00:07:10.124 1+0 records out 00:07:10.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197084 s, 20.8 MB/s 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:10.124 15:25:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:10.124 15:25:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.124 15:25:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.124 15:25:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:10.383 /dev/nbd1 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.383 1+0 records in 00:07:10.383 1+0 records out 00:07:10.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167007 s, 24.5 MB/s 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:10.383 15:25:23 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:10.383 15:25:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.383 15:25:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.641 { 00:07:10.641 "nbd_device": "/dev/nbd0", 00:07:10.641 "bdev_name": "Malloc0" 00:07:10.641 }, 00:07:10.641 { 00:07:10.641 "nbd_device": "/dev/nbd1", 00:07:10.641 "bdev_name": "Malloc1" 00:07:10.641 } 00:07:10.641 ]' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.641 { 00:07:10.641 "nbd_device": "/dev/nbd0", 00:07:10.641 "bdev_name": "Malloc0" 00:07:10.641 }, 00:07:10.641 { 00:07:10.641 "nbd_device": "/dev/nbd1", 00:07:10.641 "bdev_name": "Malloc1" 00:07:10.641 } 00:07:10.641 ]' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.641 /dev/nbd1' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.641 /dev/nbd1' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.641 256+0 records in 00:07:10.641 256+0 records out 00:07:10.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503861 s, 208 MB/s 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.641 256+0 records in 00:07:10.641 256+0 records out 00:07:10.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0198548 s, 52.8 MB/s 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.641 256+0 records in 00:07:10.641 256+0 records out 00:07:10.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024259 s, 43.2 MB/s 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.641 15:25:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.642 15:25:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.899 15:25:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.900 15:25:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.900 15:25:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.157 15:25:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.416 15:25:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.416 15:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.416 15:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.673 15:25:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.673 15:25:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.931 15:25:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.190 [2024-05-15 15:25:25.039070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.190 [2024-05-15 15:25:25.126179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.190 [2024-05-15 15:25:25.126184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.190 [2024-05-15 15:25:25.186158] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.190 [2024-05-15 15:25:25.186265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
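[editorial aside, not part of the captured log] The write/verify pass traced above fills a temp file from /dev/urandom, copies it onto each nbd device with O_DIRECT, and then byte-compares the first 1 MiB of each device against that file. A simplified sketch of the same pattern, with paths shortened and the device list hard-coded for illustration, might be:

    #!/usr/bin/env bash
    set -e
    tmp_file=/tmp/nbdrandtest          # stand-in for the per-test temp file in the log
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: compare the first 1 MiB of each device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"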
00:07:14.780 15:25:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.780 15:25:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:14.780 spdk_app_start Round 2 00:07:14.780 15:25:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1181261 /var/tmp/spdk-nbd.sock 00:07:14.780 15:25:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1181261 ']' 00:07:14.780 15:25:27 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.780 15:25:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.780 15:25:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.780 15:25:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.780 15:25:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.038 15:25:28 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.038 15:25:28 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:15.038 15:25:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.296 Malloc0 00:07:15.296 15:25:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.554 Malloc1 00:07:15.554 15:25:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.554 15:25:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.812 /dev/nbd0 00:07:15.812 15:25:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.812 15:25:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:07:15.812 15:25:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:15.812 15:25:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.813 1+0 records in 00:07:15.813 1+0 records out 00:07:15.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157723 s, 26.0 MB/s 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:15.813 15:25:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:15.813 15:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.813 15:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.813 15:25:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.071 /dev/nbd1 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.071 1+0 records in 00:07:16.071 1+0 records out 00:07:16.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149098 s, 27.5 MB/s 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:16.071 15:25:29 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:16.071 15:25:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.071 15:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.329 15:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.329 { 00:07:16.329 "nbd_device": "/dev/nbd0", 00:07:16.329 "bdev_name": "Malloc0" 00:07:16.329 }, 00:07:16.329 { 00:07:16.329 "nbd_device": "/dev/nbd1", 00:07:16.329 "bdev_name": "Malloc1" 00:07:16.329 } 00:07:16.329 ]' 00:07:16.329 15:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.329 { 00:07:16.329 "nbd_device": "/dev/nbd0", 00:07:16.329 "bdev_name": "Malloc0" 00:07:16.329 }, 00:07:16.330 { 00:07:16.330 "nbd_device": "/dev/nbd1", 00:07:16.330 "bdev_name": "Malloc1" 00:07:16.330 } 00:07:16.330 ]' 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.330 /dev/nbd1' 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.330 /dev/nbd1' 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.330 256+0 records in 00:07:16.330 256+0 records out 00:07:16.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505126 s, 208 MB/s 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.330 256+0 records in 00:07:16.330 256+0 records out 00:07:16.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0229894 s, 45.6 MB/s 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.330 15:25:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.587 256+0 records in 00:07:16.587 256+0 records out 00:07:16.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240747 s, 43.6 MB/s 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.587 15:25:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.845 15:25:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.103 15:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.361 15:25:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.361 15:25:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.618 15:25:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.876 [2024-05-15 15:25:30.769471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.876 [2024-05-15 15:25:30.853741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.876 [2024-05-15 15:25:30.853741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.876 [2024-05-15 15:25:30.917019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.876 [2024-05-15 15:25:30.917100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
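[editorial aside, not part of the captured log] The count check above asks the target for its exported nbd devices over the RPC socket and counts the /dev/nbd entries in the JSON reply; after both disks are stopped the list is empty and the count comes back 0. A rough equivalent of that pipeline, with the rpc.py path abbreviated, would be:

    #!/usr/bin/env bash
    rpc_sock=/var/tmp/spdk-nbd.sock
    rpc_py=./scripts/rpc.py            # abbreviated path to SPDK's rpc.py

    # nbd_get_disks returns JSON like:
    # [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
    disks_json=$("$rpc_py" -s "$rpc_sock" nbd_get_disks)

    # pull out just the device paths and count how many look like /dev/nbd*
    disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disk_names" | grep -c /dev/nbd || true)   # grep exits 1 on zero matches
    echo "exported nbd devices: $count"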
00:07:21.155 15:25:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1181261 /var/tmp/spdk-nbd.sock 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1181261 ']' 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:21.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:21.155 15:25:33 event.app_repeat -- event/event.sh@39 -- # killprocess 1181261 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1181261 ']' 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1181261 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1181261 00:07:21.155 15:25:33 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.156 15:25:33 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.156 15:25:33 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1181261' 00:07:21.156 killing process with pid 1181261 00:07:21.156 15:25:33 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1181261 00:07:21.156 15:25:33 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1181261 00:07:21.156 spdk_app_start is called in Round 0. 00:07:21.156 Shutdown signal received, stop current app iteration 00:07:21.156 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:07:21.156 spdk_app_start is called in Round 1. 00:07:21.156 Shutdown signal received, stop current app iteration 00:07:21.156 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:07:21.156 spdk_app_start is called in Round 2. 00:07:21.156 Shutdown signal received, stop current app iteration 00:07:21.156 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:07:21.156 spdk_app_start is called in Round 3. 
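[editorial aside, not part of the captured log] The "spdk_app_start is called in Round N" and "Shutdown signal received" messages around this point summarize app_repeat cycling the same start/verify/kill sequence three times. A condensed sketch of that outer loop, with helper names taken from the event.sh trace and the round body reduced to the calls visible in the log, is:

    #!/usr/bin/env bash
    rpc_sock=/var/tmp/spdk-nbd.sock
    rpc_py=./scripts/rpc.py            # abbreviated path

    # Each round: create two malloc bdevs, expose them over nbd, write/verify,
    # then ask the app to shut itself down and give it a moment before the next round.
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # (the real event.sh also re-waits for the RPC socket between rounds)
        "$rpc_py" -s "$rpc_sock" bdev_malloc_create 64 4096       # -> Malloc0
        "$rpc_py" -s "$rpc_sock" bdev_malloc_create 64 4096       # -> Malloc1
        "$rpc_py" -s "$rpc_sock" nbd_start_disk Malloc0 /dev/nbd0
        "$rpc_py" -s "$rpc_sock" nbd_start_disk Malloc1 /dev/nbd1
        # ... dd/cmp data verification as sketched earlier ...
        "$rpc_py" -s "$rpc_sock" spdk_kill_instance SIGTERM
        sleep 3
    done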
00:07:21.156 Shutdown signal received, stop current app iteration 00:07:21.156 15:25:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:21.156 15:25:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:21.156 00:07:21.156 real 0m17.725s 00:07:21.156 user 0m38.948s 00:07:21.156 sys 0m3.309s 00:07:21.156 15:25:34 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.156 15:25:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.156 ************************************ 00:07:21.156 END TEST app_repeat 00:07:21.156 ************************************ 00:07:21.156 15:25:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:21.156 15:25:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:21.156 15:25:34 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.156 15:25:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.156 15:25:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.156 ************************************ 00:07:21.156 START TEST cpu_locks 00:07:21.156 ************************************ 00:07:21.156 15:25:34 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:21.156 * Looking for test storage... 00:07:21.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:21.156 15:25:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:21.156 15:25:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:21.156 15:25:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:21.156 15:25:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:21.156 15:25:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.156 15:25:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.156 15:25:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.156 ************************************ 00:07:21.156 START TEST default_locks 00:07:21.156 ************************************ 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1183612 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1183612 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1183612 ']' 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
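[editorial aside, not part of the captured log] The default_locks test launched above starts spdk_tgt pinned to core 0 (-m 0x1) and then verifies, via the lslocks call in the following trace, that the running target holds a CPU lock file before killing it. A bare-bones sketch of that check and of the killprocess-style cleanup, with paths and helper names simplified, could be:

    #!/usr/bin/env bash
    spdk_tgt=./build/bin/spdk_tgt      # abbreviated path to the SPDK target binary

    "$spdk_tgt" -m 0x1 &               # single-core mask, as in the log
    pid=$!
    sleep 1                            # crude stand-in for waitforlisten on /var/tmp/spdk.sock

    # locks_exist: lslocks -p lists the file locks held by the target pid;
    # the grep pattern matches the spdk_cpu_lock entries seen in the trace.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock is held by pid $pid"
    fi

    # killprocess-style cleanup: terminate the target and wait for it to exit
    kill "$pid"
    wait "$pid" || true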
00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.156 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.156 [2024-05-15 15:25:34.198475] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:21.156 [2024-05-15 15:25:34.198564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183612 ] 00:07:21.156 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.156 [2024-05-15 15:25:34.239929] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.414 [2024-05-15 15:25:34.275043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.414 [2024-05-15 15:25:34.361104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.671 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.671 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:21.671 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1183612 00:07:21.671 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1183612 00:07:21.671 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.929 lslocks: write error 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1183612 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1183612 ']' 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1183612 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1183612 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1183612' 00:07:21.929 killing process with pid 1183612 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1183612 00:07:21.929 15:25:34 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1183612 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1183612 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1183612 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:22.495 
15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1183612 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1183612 ']' 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1183612) - No such process 00:07:22.495 ERROR: process (pid: 1183612) is no longer running 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.495 00:07:22.495 real 0m1.157s 00:07:22.495 user 0m1.050s 00:07:22.495 sys 0m0.541s 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.495 15:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.495 ************************************ 00:07:22.495 END TEST default_locks 00:07:22.495 ************************************ 00:07:22.495 15:25:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:22.495 15:25:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.495 15:25:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.495 15:25:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.495 ************************************ 00:07:22.495 START TEST default_locks_via_rpc 00:07:22.495 ************************************ 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1183777 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1183777 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1183777 ']' 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.495 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.495 [2024-05-15 15:25:35.413283] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:22.495 [2024-05-15 15:25:35.413374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183777 ] 00:07:22.495 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.495 [2024-05-15 15:25:35.450192] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:22.495 [2024-05-15 15:25:35.481084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.495 [2024-05-15 15:25:35.566611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@71 -- # locks_exist 1183777 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1183777 00:07:22.753 15:25:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1183777 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1183777 ']' 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1183777 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1183777 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1183777' 00:07:23.318 killing process with pid 1183777 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 1183777 00:07:23.318 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1183777 00:07:23.577 00:07:23.577 real 0m1.201s 00:07:23.577 user 0m1.136s 00:07:23.577 sys 0m0.529s 00:07:23.577 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.577 15:25:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.577 ************************************ 00:07:23.577 END TEST default_locks_via_rpc 00:07:23.577 ************************************ 00:07:23.577 15:25:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:23.577 15:25:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.577 15:25:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.577 15:25:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.577 ************************************ 00:07:23.577 START TEST non_locking_app_on_locked_coremask 00:07:23.577 ************************************ 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1183951 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1183951 /var/tmp/spdk.sock 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1183951 ']' 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.577 15:25:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.577 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.577 [2024-05-15 15:25:36.666678] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:23.577 [2024-05-15 15:25:36.666777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183951 ] 00:07:23.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.835 [2024-05-15 15:25:36.702476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.835 [2024-05-15 15:25:36.739635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.835 [2024-05-15 15:25:36.826606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1183964 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1183964 /var/tmp/spdk2.sock 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1183964 ']' 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.093 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.093 [2024-05-15 15:25:37.136342] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
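The locks_exist helper exercised above reduces to a single lslocks check on the target pid; a minimal standalone sketch of the same probe (the pgrep-based pid lookup is an illustrative assumption, the test scripts pass the pid explicitly):

    # Probe whether a running spdk_tgt is holding CPU core lock files.
    pid=$(pgrep -f spdk_tgt | head -n1)          # assumption: only one target of interest
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds spdk_cpu_lock files"
    fi
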
00:07:24.093 [2024-05-15 15:25:37.136434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183964 ] 00:07:24.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.093 [2024-05-15 15:25:37.174057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.352 [2024-05-15 15:25:37.237975] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:24.352 [2024-05-15 15:25:37.238002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.352 [2024-05-15 15:25:37.414568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.285 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.286 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:25.286 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1183951 00:07:25.286 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1183951 00:07:25.286 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.542 lslocks: write error 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1183951 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1183951 ']' 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1183951 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1183951 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1183951' 00:07:25.542 killing process with pid 1183951 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1183951 00:07:25.542 15:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1183951 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1183964 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1183964 ']' 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1183964 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1183964 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1183964' 00:07:26.475 killing process with pid 1183964 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1183964 00:07:26.475 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1183964 00:07:26.733 00:07:26.733 real 0m3.180s 00:07:26.733 user 0m3.301s 00:07:26.733 sys 0m1.055s 00:07:26.733 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.733 15:25:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.733 ************************************ 00:07:26.733 END TEST non_locking_app_on_locked_coremask 00:07:26.733 ************************************ 00:07:26.733 15:25:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:26.733 15:25:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.733 15:25:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.733 15:25:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.991 ************************************ 00:07:26.991 START TEST locking_app_on_unlocked_coremask 00:07:26.991 ************************************ 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1184393 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1184393 /var/tmp/spdk.sock 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1184393 ']' 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.991 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.991 [2024-05-15 15:25:39.902404] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:26.991 [2024-05-15 15:25:39.902492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184393 ] 00:07:26.991 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.991 [2024-05-15 15:25:39.938442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.991 [2024-05-15 15:25:39.975282] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:26.991 [2024-05-15 15:25:39.975313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.991 [2024-05-15 15:25:40.066287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1184397 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1184397 /var/tmp/spdk2.sock 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1184397 ']' 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.249 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 [2024-05-15 15:25:40.381288] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:27.507 [2024-05-15 15:25:40.381381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184397 ] 00:07:27.507 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.507 [2024-05-15 15:25:40.421000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
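At this point the first target (pid 1184393) is running with core locks disabled while the second (pid 1184397) claims core 0; roughly equivalent launch commands, sketched under the assumption that $SPDK_DIR points at the build directory used above:

    # First instance skips the core 0 claim, so a second instance on the same mask can take it.
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
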
00:07:27.507 [2024-05-15 15:25:40.494202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.764 [2024-05-15 15:25:40.675909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.329 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.329 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:28.329 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1184397 00:07:28.329 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1184397 00:07:28.329 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.587 lslocks: write error 00:07:28.587 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1184393 00:07:28.587 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1184393 ']' 00:07:28.587 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1184393 00:07:28.587 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:28.587 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.587 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1184393 00:07:28.845 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.845 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.845 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1184393' 00:07:28.845 killing process with pid 1184393 00:07:28.845 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1184393 00:07:28.845 15:25:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1184393 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1184397 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1184397 ']' 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1184397 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1184397 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1184397' 00:07:29.816 killing process with pid 1184397 00:07:29.816 
15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1184397 00:07:29.816 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1184397 00:07:30.073 00:07:30.073 real 0m3.108s 00:07:30.073 user 0m3.226s 00:07:30.073 sys 0m1.059s 00:07:30.073 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.073 15:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.073 ************************************ 00:07:30.073 END TEST locking_app_on_unlocked_coremask 00:07:30.073 ************************************ 00:07:30.073 15:25:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.073 15:25:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.073 15:25:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.073 15:25:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.073 ************************************ 00:07:30.073 START TEST locking_app_on_locked_coremask 00:07:30.073 ************************************ 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1184734 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1184734 /var/tmp/spdk.sock 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1184734 ']' 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:30.073 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.073 [2024-05-15 15:25:43.070049] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:30.073 [2024-05-15 15:25:43.070140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184734 ] 00:07:30.073 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.073 [2024-05-15 15:25:43.107835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:30.073 [2024-05-15 15:25:43.143839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.330 [2024-05-15 15:25:43.231528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1184833 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1184833 /var/tmp/spdk2.sock 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1184833 /var/tmp/spdk2.sock 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1184833 /var/tmp/spdk2.sock 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1184833 ']' 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:30.588 15:25:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.588 [2024-05-15 15:25:43.545287] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:30.588 [2024-05-15 15:25:43.545384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184833 ] 00:07:30.588 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.588 [2024-05-15 15:25:43.583015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
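Here the roles are reversed: pid 1184734 already holds the core 0 lock, so the second spdk_tgt launched above is expected to abort, which the claim_cpu_cores error just below confirms. The conflicting launch as a sketch (same $SPDK_DIR assumption as the earlier sketch):

    # Expected to fail: core 0 is already claimed by pid 1184734.
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
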
00:07:30.588 [2024-05-15 15:25:43.656325] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1184734 has claimed it. 00:07:30.588 [2024-05-15 15:25:43.656380] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1184833) - No such process 00:07:31.152 ERROR: process (pid: 1184833) is no longer running 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1184734 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1184734 00:07:31.152 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.409 lslocks: write error 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1184734 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1184734 ']' 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1184734 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1184734 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1184734' 00:07:31.409 killing process with pid 1184734 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1184734 00:07:31.409 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1184734 00:07:31.973 00:07:31.973 real 0m1.875s 00:07:31.973 user 0m2.007s 00:07:31.973 sys 0m0.619s 00:07:31.973 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.973 15:25:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.973 ************************************ 00:07:31.973 END TEST locking_app_on_locked_coremask 00:07:31.973 ************************************ 00:07:31.973 15:25:44 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:31.973 15:25:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:31.973 15:25:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.973 15:25:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.973 ************************************ 00:07:31.973 START TEST locking_overlapped_coremask 00:07:31.973 ************************************ 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1185003 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1185003 /var/tmp/spdk.sock 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1185003 ']' 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.973 15:25:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.973 [2024-05-15 15:25:44.995399] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:31.973 [2024-05-15 15:25:44.995476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185003 ] 00:07:31.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.974 [2024-05-15 15:25:45.032866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
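locking_overlapped_coremask pairs mask 0x7 (cores 0-2) with 0x1c (cores 2-4); the two masks intersect on core 2, which is exactly where the claim failure further below lands. The overlap can be checked with plain shell arithmetic:

    # 0x7 & 0x1c == 0x4, i.e. only core 2 is contested.
    printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))
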
00:07:31.974 [2024-05-15 15:25:45.071474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.231 [2024-05-15 15:25:45.164289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.231 [2024-05-15 15:25:45.164312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.231 [2024-05-15 15:25:45.164314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.489 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:32.489 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:32.489 15:25:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1185132 00:07:32.489 15:25:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1185132 /var/tmp/spdk2.sock 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1185132 /var/tmp/spdk2.sock 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1185132 /var/tmp/spdk2.sock 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1185132 ']' 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.490 15:25:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.490 [2024-05-15 15:25:45.456846] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:32.490 [2024-05-15 15:25:45.456947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185132 ] 00:07:32.490 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.490 [2024-05-15 15:25:45.497300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:32.490 [2024-05-15 15:25:45.560560] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1185003 has claimed it. 00:07:32.490 [2024-05-15 15:25:45.560609] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:33.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1185132) - No such process 00:07:33.054 ERROR: process (pid: 1185132) is no longer running 00:07:33.054 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.054 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:33.054 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:33.054 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1185003 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1185003 ']' 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1185003 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.055 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1185003 00:07:33.312 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.312 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.312 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1185003' 00:07:33.312 killing process with pid 1185003 00:07:33.312 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 1185003 00:07:33.312 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1185003 00:07:33.570 00:07:33.570 real 0m1.623s 00:07:33.570 user 0m4.357s 00:07:33.570 sys 0m0.471s 00:07:33.570 15:25:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.570 ************************************ 00:07:33.570 END TEST locking_overlapped_coremask 00:07:33.570 ************************************ 00:07:33.570 15:25:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:33.570 15:25:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.570 15:25:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.570 15:25:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.570 ************************************ 00:07:33.570 START TEST locking_overlapped_coremask_via_rpc 00:07:33.570 ************************************ 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1185295 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1185295 /var/tmp/spdk.sock 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1185295 ']' 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:33.570 15:25:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.828 [2024-05-15 15:25:46.671681] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:33.828 [2024-05-15 15:25:46.671761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185295 ] 00:07:33.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.828 [2024-05-15 15:25:46.708771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:33.828 [2024-05-15 15:25:46.738896] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:33.828 [2024-05-15 15:25:46.738921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.828 [2024-05-15 15:25:46.825597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.828 [2024-05-15 15:25:46.825660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.828 [2024-05-15 15:25:46.825662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1185307 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1185307 /var/tmp/spdk2.sock 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1185307 ']' 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.087 15:25:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.087 [2024-05-15 15:25:47.131500] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:34.087 [2024-05-15 15:25:47.131601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185307 ] 00:07:34.087 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.087 [2024-05-15 15:25:47.170246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.346 [2024-05-15 15:25:47.233760] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
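Both targets in this test come up with --disable-cpumask-locks, so neither claims its cores at startup; the claims are made lazily through the framework_enable_cpumask_locks RPC in the lines that follow. Sketched with scripts/rpc.py (an assumption; the harness uses its own rpc_cmd wrapper), the sequence and its expected outcome are:

    # First target (mask 0x7) claims cores 0-2 successfully.
    ./scripts/rpc.py framework_enable_cpumask_locks
    # Second target (mask 0x1c) then fails: core 2 is already locked.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
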
00:07:34.346 [2024-05-15 15:25:47.233786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.346 [2024-05-15 15:25:47.402694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.346 [2024-05-15 15:25:47.406250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:34.346 [2024-05-15 15:25:47.406252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.279 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.280 [2024-05-15 15:25:48.081336] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1185295 has claimed it. 
00:07:35.280 request: 00:07:35.280 { 00:07:35.280 "method": "framework_enable_cpumask_locks", 00:07:35.280 "req_id": 1 00:07:35.280 } 00:07:35.280 Got JSON-RPC error response 00:07:35.280 response: 00:07:35.280 { 00:07:35.280 "code": -32603, 00:07:35.280 "message": "Failed to claim CPU core: 2" 00:07:35.280 } 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1185295 /var/tmp/spdk.sock 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1185295 ']' 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1185307 /var/tmp/spdk2.sock 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1185307 ']' 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
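After the failed claim, the check_remaining_locks step below verifies that only the first target's cores stay locked; with mask 0x7 that means exactly three lock files on disk:

    # Expected: /var/tmp/spdk_cpu_lock_000, _001 and _002, and nothing else.
    ls /var/tmp/spdk_cpu_lock_*
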
00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.280 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:35.537 00:07:35.537 real 0m1.970s 00:07:35.537 user 0m1.012s 00:07:35.537 sys 0m0.185s 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.537 15:25:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.537 ************************************ 00:07:35.537 END TEST locking_overlapped_coremask_via_rpc 00:07:35.537 ************************************ 00:07:35.537 15:25:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:35.537 15:25:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1185295 ]] 00:07:35.537 15:25:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1185295 00:07:35.537 15:25:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1185295 ']' 00:07:35.537 15:25:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1185295 00:07:35.537 15:25:48 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:35.537 15:25:48 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.537 15:25:48 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1185295 00:07:35.794 15:25:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:35.794 15:25:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:35.794 15:25:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1185295' 00:07:35.794 killing process with pid 1185295 00:07:35.794 15:25:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1185295 00:07:35.794 15:25:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1185295 00:07:36.052 15:25:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1185307 ]] 00:07:36.052 15:25:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1185307 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1185307 ']' 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1185307 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1185307 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1185307' 00:07:36.052 killing process with pid 1185307 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1185307 00:07:36.052 15:25:49 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1185307 00:07:36.618 15:25:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:36.618 15:25:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:36.618 15:25:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1185295 ]] 00:07:36.618 15:25:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1185295 00:07:36.618 15:25:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1185295 ']' 00:07:36.618 15:25:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1185295 00:07:36.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1185295) - No such process 00:07:36.618 15:25:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1185295 is not found' 00:07:36.618 Process with pid 1185295 is not found 00:07:36.619 15:25:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1185307 ]] 00:07:36.619 15:25:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1185307 00:07:36.619 15:25:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1185307 ']' 00:07:36.619 15:25:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1185307 00:07:36.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1185307) - No such process 00:07:36.619 15:25:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1185307 is not found' 00:07:36.619 Process with pid 1185307 is not found 00:07:36.619 15:25:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:36.619 00:07:36.619 real 0m15.395s 00:07:36.619 user 0m26.795s 00:07:36.619 sys 0m5.360s 00:07:36.619 15:25:49 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.619 15:25:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.619 ************************************ 00:07:36.619 END TEST cpu_locks 00:07:36.619 ************************************ 00:07:36.619 00:07:36.619 real 0m41.443s 00:07:36.619 user 1m19.203s 00:07:36.619 sys 0m9.550s 00:07:36.619 15:25:49 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.619 15:25:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.619 ************************************ 00:07:36.619 END TEST event 00:07:36.619 ************************************ 00:07:36.619 15:25:49 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:36.619 15:25:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.619 15:25:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.619 15:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:36.619 ************************************ 00:07:36.619 START TEST thread 00:07:36.619 ************************************ 00:07:36.619 15:25:49 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:36.619 * Looking for test storage... 00:07:36.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:36.619 15:25:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:36.619 15:25:49 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:36.619 15:25:49 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.619 15:25:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.619 ************************************ 00:07:36.619 START TEST thread_poller_perf 00:07:36.619 ************************************ 00:07:36.619 15:25:49 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:36.619 [2024-05-15 15:25:49.645586] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:36.619 [2024-05-15 15:25:49.645649] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185672 ] 00:07:36.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.619 [2024-05-15 15:25:49.683935] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.619 [2024-05-15 15:25:49.715984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.877 [2024-05-15 15:25:49.802289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.877 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:37.810 ====================================== 00:07:37.810 busy:2711432084 (cyc) 00:07:37.810 total_run_count: 293000 00:07:37.810 tsc_hz: 2700000000 (cyc) 00:07:37.810 ====================================== 00:07:37.810 poller_cost: 9254 (cyc), 3427 (nsec) 00:07:37.810 00:07:37.810 real 0m1.257s 00:07:37.810 user 0m1.157s 00:07:37.810 sys 0m0.094s 00:07:37.810 15:25:50 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.810 15:25:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.810 ************************************ 00:07:37.810 END TEST thread_poller_perf 00:07:37.810 ************************************ 00:07:38.068 15:25:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.068 15:25:50 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:38.068 15:25:50 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.068 15:25:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.068 ************************************ 00:07:38.068 START TEST thread_poller_perf 00:07:38.068 ************************************ 00:07:38.068 15:25:50 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.068 [2024-05-15 15:25:50.957467] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
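The poller_perf summary above reports raw counters; poller_cost is simply busy cycles divided by the run count, converted to nanoseconds via the TSC frequency. A quick check with the numbers from the first run:

    busy=2711432084; runs=293000; tsc_hz=2700000000
    echo "cycles per poll: $(( busy / runs ))"                        # ~9254 cyc
    echo "nsec per poll:   $(( busy / runs * 1000000000 / tsc_hz ))"  # ~3427 nsec
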
00:07:38.068 [2024-05-15 15:25:50.957542] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185833 ] 00:07:38.068 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.068 [2024-05-15 15:25:50.995631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:38.068 [2024-05-15 15:25:51.033423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.068 [2024-05-15 15:25:51.121131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.068 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:39.439 ====================================== 00:07:39.439 busy:2702306834 (cyc) 00:07:39.439 total_run_count: 3865000 00:07:39.439 tsc_hz: 2700000000 (cyc) 00:07:39.439 ====================================== 00:07:39.439 poller_cost: 699 (cyc), 258 (nsec) 00:07:39.439 00:07:39.439 real 0m1.258s 00:07:39.439 user 0m1.156s 00:07:39.439 sys 0m0.097s 00:07:39.439 15:25:52 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.439 15:25:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.439 ************************************ 00:07:39.439 END TEST thread_poller_perf 00:07:39.439 ************************************ 00:07:39.439 15:25:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:39.439 00:07:39.439 real 0m2.668s 00:07:39.439 user 0m2.377s 00:07:39.439 sys 0m0.284s 00:07:39.439 15:25:52 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.439 15:25:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.439 ************************************ 00:07:39.439 END TEST thread 00:07:39.439 ************************************ 00:07:39.439 15:25:52 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:39.439 15:25:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.439 15:25:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.439 15:25:52 -- common/autotest_common.sh@10 -- # set +x 00:07:39.439 ************************************ 00:07:39.439 START TEST accel 00:07:39.439 ************************************ 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:39.439 * Looking for test storage... 
00:07:39.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:39.439 15:25:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:39.439 15:25:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:39.439 15:25:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.439 15:25:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1186145 00:07:39.439 15:25:52 accel -- accel/accel.sh@63 -- # waitforlisten 1186145 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@827 -- # '[' -z 1186145 ']' 00:07:39.439 15:25:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.439 15:25:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.439 15:25:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.439 15:25:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.439 15:25:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.439 15:25:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.439 15:25:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.439 15:25:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.439 15:25:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:39.439 15:25:52 accel -- accel/accel.sh@41 -- # jq -r . 00:07:39.439 [2024-05-15 15:25:52.375844] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:39.439 [2024-05-15 15:25:52.375924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186145 ] 00:07:39.439 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.439 [2024-05-15 15:25:52.411782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.439 [2024-05-15 15:25:52.442798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.439 [2024-05-15 15:25:52.523619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.697 15:25:52 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.697 15:25:52 accel -- common/autotest_common.sh@860 -- # return 0 00:07:39.697 15:25:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:39.697 15:25:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:39.697 15:25:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:39.697 15:25:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:39.697 15:25:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:39.697 15:25:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:39.697 15:25:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.697 15:25:52 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:39.697 15:25:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.697 15:25:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 
15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.955 15:25:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.955 15:25:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.955 15:25:52 accel -- accel/accel.sh@75 -- # killprocess 1186145 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@946 -- # '[' -z 1186145 ']' 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@950 -- # kill -0 1186145 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@951 -- # uname 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1186145 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1186145' 00:07:39.955 killing process with pid 1186145 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@965 -- # kill 1186145 00:07:39.955 15:25:52 accel -- common/autotest_common.sh@970 -- # wait 1186145 00:07:40.213 15:25:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:40.213 15:25:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:40.213 15:25:53 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:40.213 15:25:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.213 15:25:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.213 15:25:53 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:40.213 15:25:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:40.472 15:25:53 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.472 15:25:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 15:25:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:40.472 15:25:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:40.472 15:25:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.472 15:25:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 ************************************ 00:07:40.472 START TEST accel_missing_filename 00:07:40.472 ************************************ 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.472 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:40.472 15:25:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:40.472 [2024-05-15 15:25:53.379610] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:40.472 [2024-05-15 15:25:53.379661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186315 ] 00:07:40.472 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.472 [2024-05-15 15:25:53.414600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:40.472 [2024-05-15 15:25:53.445917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.472 [2024-05-15 15:25:53.535467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.731 [2024-05-15 15:25:53.597917] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.731 [2024-05-15 15:25:53.687023] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:40.731 A filename is required. 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.731 00:07:40.731 real 0m0.408s 00:07:40.731 user 0m0.282s 00:07:40.731 sys 0m0.157s 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.731 15:25:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:40.731 ************************************ 00:07:40.731 END TEST accel_missing_filename 00:07:40.731 ************************************ 00:07:40.731 15:25:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.731 15:25:53 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:40.731 15:25:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.731 15:25:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.731 ************************************ 00:07:40.731 START TEST accel_compress_verify 00:07:40.731 ************************************ 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.731 15:25:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:40.731 
15:25:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:40.731 15:25:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:40.989 [2024-05-15 15:25:53.843311] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:40.989 [2024-05-15 15:25:53.843370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186337 ] 00:07:40.989 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.989 [2024-05-15 15:25:53.879618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:40.989 [2024-05-15 15:25:53.917178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.989 [2024-05-15 15:25:54.005536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.989 [2024-05-15 15:25:54.068116] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.247 [2024-05-15 15:25:54.152767] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:41.247 00:07:41.247 Compression does not support the verify option, aborting. 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.247 00:07:41.247 real 0m0.407s 00:07:41.247 user 0m0.284s 00:07:41.247 sys 0m0.155s 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.247 15:25:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 ************************************ 00:07:41.247 END TEST accel_compress_verify 00:07:41.247 ************************************ 00:07:41.247 15:25:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:41.247 15:25:54 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:41.247 15:25:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.247 15:25:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 ************************************ 00:07:41.247 START TEST accel_wrong_workload 00:07:41.247 ************************************ 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:41.247 15:25:54 accel.accel_wrong_workload -- 
common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:41.247 15:25:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:41.247 Unsupported workload type: foobar 00:07:41.247 [2024-05-15 15:25:54.297637] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:41.247 accel_perf options: 00:07:41.247 [-h help message] 00:07:41.247 [-q queue depth per core] 00:07:41.247 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:41.247 [-T number of threads per core 00:07:41.247 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:41.247 [-t time in seconds] 00:07:41.247 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:41.247 [ dif_verify, , dif_generate, dif_generate_copy 00:07:41.247 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:41.247 [-l for compress/decompress workloads, name of uncompressed input file 00:07:41.247 [-S for crc32c workload, use this seed value (default 0) 00:07:41.247 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:41.247 [-f for fill workload, use this BYTE value (default 255) 00:07:41.247 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:41.247 [-y verify result if this switch is on] 00:07:41.247 [-a tasks to allocate per core (default: same value as -q)] 00:07:41.247 Can be used to spread operations across a wider range of memory. 
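The option listing above is the built-in help of the accel_perf example binary exercised throughout this section. For contrast with the deliberately failing invocations in these negative tests, a minimal well-formed run (a sketch assembled only from the flags documented above and the binary path shown in this log, not a command captured in this run) would be:

    # software crc32c workload for 1 second: queue depth 64, 4 KiB transfers, seed 32, verify results
    ./build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y

The accel_crc32c tests later in this log drive essentially this configuration via accel_test -t 1 -w crc32c -S 32 -y.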
00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.247 00:07:41.247 real 0m0.021s 00:07:41.247 user 0m0.012s 00:07:41.247 sys 0m0.009s 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.247 15:25:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 ************************************ 00:07:41.247 END TEST accel_wrong_workload 00:07:41.247 ************************************ 00:07:41.247 Error: writing output failed: Broken pipe 00:07:41.247 15:25:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:41.247 15:25:54 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:41.247 15:25:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.247 15:25:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.505 ************************************ 00:07:41.505 START TEST accel_negative_buffers 00:07:41.505 ************************************ 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:41.505 15:25:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:41.505 -x option must be non-negative. 
00:07:41.505 [2024-05-15 15:25:54.374021] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:41.505 accel_perf options: 00:07:41.505 [-h help message] 00:07:41.505 [-q queue depth per core] 00:07:41.505 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:41.505 [-T number of threads per core 00:07:41.505 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:41.505 [-t time in seconds] 00:07:41.505 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:41.505 [ dif_verify, , dif_generate, dif_generate_copy 00:07:41.505 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:41.505 [-l for compress/decompress workloads, name of uncompressed input file 00:07:41.505 [-S for crc32c workload, use this seed value (default 0) 00:07:41.505 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:41.505 [-f for fill workload, use this BYTE value (default 255) 00:07:41.505 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:41.505 [-y verify result if this switch is on] 00:07:41.505 [-a tasks to allocate per core (default: same value as -q)] 00:07:41.505 Can be used to spread operations across a wider range of memory. 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.505 00:07:41.505 real 0m0.024s 00:07:41.505 user 0m0.011s 00:07:41.505 sys 0m0.013s 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.505 15:25:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:41.505 ************************************ 00:07:41.505 END TEST accel_negative_buffers 00:07:41.505 ************************************ 00:07:41.505 Error: writing output failed: Broken pipe 00:07:41.505 15:25:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:41.505 15:25:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:41.505 15:25:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.505 15:25:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.505 ************************************ 00:07:41.505 START TEST accel_crc32c 00:07:41.505 ************************************ 00:07:41.505 15:25:54 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.505 15:25:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.506 15:25:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.506 15:25:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.506 15:25:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:41.506 15:25:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:41.506 [2024-05-15 15:25:54.439100] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:41.506 [2024-05-15 15:25:54.439166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186474 ] 00:07:41.506 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.506 [2024-05-15 15:25:54.474200] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.506 [2024-05-15 15:25:54.511409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.506 [2024-05-15 15:25:54.599524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:41.764 
15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.764 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.765 15:25:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.765 15:25:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.765 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.765 15:25:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.136 15:25:55 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:43.136 15:25:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.136 00:07:43.136 real 0m1.397s 00:07:43.136 user 0m1.254s 00:07:43.136 sys 0m0.145s 00:07:43.136 15:25:55 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.136 15:25:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 ************************************ 00:07:43.136 END TEST accel_crc32c 00:07:43.136 ************************************ 00:07:43.136 15:25:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:43.136 15:25:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:43.136 15:25:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.136 15:25:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 ************************************ 00:07:43.136 START TEST accel_crc32c_C2 00:07:43.136 ************************************ 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.136 15:25:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:43.136 [2024-05-15 15:25:55.888565] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:43.136 [2024-05-15 15:25:55.888629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186678 ] 00:07:43.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.136 [2024-05-15 15:25:55.926092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:43.136 [2024-05-15 15:25:55.961166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.136 [2024-05-15 15:25:56.051481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:43.137 
15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.137 15:25:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.508 00:07:44.508 real 0m1.415s 00:07:44.508 user 0m1.260s 00:07:44.508 sys 0m0.156s 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.508 15:25:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:44.508 ************************************ 00:07:44.508 END TEST accel_crc32c_C2 00:07:44.508 ************************************ 00:07:44.508 15:25:57 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:44.508 15:25:57 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:44.508 15:25:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.508 15:25:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.508 ************************************ 00:07:44.508 START TEST accel_copy 00:07:44.508 ************************************ 00:07:44.508 15:25:57 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@16 -- # local 
accel_opc 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:44.508 [2024-05-15 15:25:57.361420] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:44.508 [2024-05-15 15:25:57.361482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186837 ] 00:07:44.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.508 [2024-05-15 15:25:57.397479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:44.508 [2024-05-15 15:25:57.434499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.508 [2024-05-15 15:25:57.522643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:44.508 15:25:57 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.508 15:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.509 15:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.509 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.509 15:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:45.924 15:25:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.924 00:07:45.924 real 0m1.416s 00:07:45.924 user 0m1.259s 00:07:45.924 sys 0m0.157s 00:07:45.924 15:25:58 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.924 15:25:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.924 ************************************ 00:07:45.924 END TEST accel_copy 00:07:45.924 ************************************ 00:07:45.924 15:25:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.924 15:25:58 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:45.924 15:25:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.924 15:25:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.924 ************************************ 00:07:45.924 START TEST accel_fill 00:07:45.924 ************************************ 00:07:45.924 15:25:58 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:45.924 15:25:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:45.924 [2024-05-15 15:25:58.825394] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:45.924 [2024-05-15 15:25:58.825452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186992 ] 00:07:45.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.924 [2024-05-15 15:25:58.862245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
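The repeated "IFS=:", "read -r var val" and "case "$var" in" lines that dominate each run all belong to one parse loop: accel.sh reads accel_perf's start-up banner as key:value pairs and records the opcode and module that the binary reports, which is exactly what the closing "[[ -n software ]] / [[ -n copy ]] / [[ software == software ]]" checks assert. A sketch of that loop, under the assumption that the banner keys contain "Workload Type" and "Module" (only the values read back, such as copy, 4096 bytes, software and 32, appear in this trace):

    # Sketch; the banner key strings are assumptions, the structure mirrors the trace.
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. copy, fill, copy_crc32c
            *"Module"*)        accel_module=${val//[[:space:]]/} ;; # e.g. software
        esac
    done < <(./build/examples/accel_perf -c <(build_accel_config) "$@")
    # End-of-test assertions seen after every workload in this log:
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]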
00:07:45.924 [2024-05-15 15:25:58.896959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.924 [2024-05-15 15:25:58.990365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:46.182 15:25:59 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.182 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.183 15:25:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:47.116 15:26:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.116 00:07:47.116 real 0m1.405s 00:07:47.116 user 0m1.258s 00:07:47.116 sys 0m0.149s 00:07:47.116 15:26:00 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.116 15:26:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:47.116 ************************************ 00:07:47.116 END TEST accel_fill 00:07:47.116 ************************************ 00:07:47.374 15:26:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:47.374 15:26:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:47.374 15:26:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.374 15:26:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.374 ************************************ 00:07:47.374 START TEST accel_copy_crc32c 00:07:47.374 ************************************ 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:47.374 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:47.374 [2024-05-15 15:26:00.278053] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:47.374 [2024-05-15 15:26:00.278112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187273 ] 00:07:47.374 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.374 [2024-05-15 15:26:00.315305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
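Every "START TEST ... / END TEST ..." banner pair, together with the real/user/sys timings between them, comes from the run_test helper in common/autotest_common.sh: it switches xtrace off, times the wrapped command and prints the asterisk banners around it. A rough sketch of the behaviour visible here; the xtrace bookkeeping and exact banner width are simplified, and the helper body is a reconstruction, not the actual source.

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # e.g. accel_test -t 1 -w copy_crc32c -y
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }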
00:07:47.374 [2024-05-15 15:26:00.350257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.374 [2024-05-15 15:26:00.440475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.632 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.633 15:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c 
-- accel/accel.sh@20 -- # val= 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.003 00:07:49.003 real 0m1.414s 00:07:49.003 user 0m1.263s 00:07:49.003 sys 0m0.155s 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.003 15:26:01 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:49.003 ************************************ 00:07:49.003 END TEST accel_copy_crc32c 00:07:49.003 ************************************ 00:07:49.003 15:26:01 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.003 15:26:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:49.003 15:26:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.003 15:26:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.003 ************************************ 00:07:49.003 START TEST accel_copy_crc32c_C2 00:07:49.003 ************************************ 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:49.003 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 
0 ]] 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:49.004 [2024-05-15 15:26:01.746051] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:49.004 [2024-05-15 15:26:01.746112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187425 ] 00:07:49.004 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.004 [2024-05-15 15:26:01.781732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.004 [2024-05-15 15:26:01.818651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.004 [2024-05-15 15:26:01.909040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 
00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.004 15:26:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.376 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.377 00:07:50.377 real 0m1.416s 00:07:50.377 user 0m1.260s 00:07:50.377 sys 0m0.159s 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.377 15:26:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:50.377 ************************************ 00:07:50.377 END TEST accel_copy_crc32c_C2 00:07:50.377 ************************************ 00:07:50.377 15:26:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:50.377 15:26:03 accel 
-- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:50.377 15:26:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.377 15:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.377 ************************************ 00:07:50.377 START TEST accel_dualcast 00:07:50.377 ************************************ 00:07:50.377 15:26:03 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:50.377 [2024-05-15 15:26:03.214645] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:50.377 [2024-05-15 15:26:03.214707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187584 ] 00:07:50.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.377 [2024-05-15 15:26:03.252005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
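Because every "[[ 0 -gt 0 ]]" guard in build_accel_config was false and "[[ -n '' ]]" saw an empty config, all of these cases fall back to the software accel module, which is why each one ends with the software == software check. A single case can therefore be reproduced outside the harness with the example binary and the flags taken from its run_test line; for instance, for the dualcast case that starts here (the paths are the ones printed throughout this log):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dualcast -y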
00:07:50.377 [2024-05-15 15:26:03.286768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.377 [2024-05-15 15:26:03.377560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.377 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.378 15:26:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:51.748 15:26:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.748 00:07:51.748 real 0m1.404s 00:07:51.748 user 0m1.261s 00:07:51.748 sys 0m0.146s 00:07:51.748 15:26:04 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.748 15:26:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:51.748 ************************************ 00:07:51.748 END TEST accel_dualcast 00:07:51.748 ************************************ 00:07:51.748 15:26:04 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:51.748 15:26:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:51.748 15:26:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.748 15:26:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.748 ************************************ 00:07:51.748 START TEST accel_compare 00:07:51.748 ************************************ 00:07:51.748 15:26:04 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.748 15:26:04 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.749 15:26:04 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:51.749 15:26:04 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:51.749 [2024-05-15 15:26:04.670714] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:51.749 [2024-05-15 15:26:04.670777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187846 ] 00:07:51.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.749 [2024-05-15 15:26:04.706665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:51.749 [2024-05-15 15:26:04.743757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.749 [2024-05-15 15:26:04.835972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.006 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.007 15:26:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:53.379 15:26:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.379 00:07:53.379 real 0m1.402s 00:07:53.379 user 0m1.246s 00:07:53.379 sys 0m0.158s 00:07:53.379 15:26:06 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.379 15:26:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:53.379 ************************************ 00:07:53.379 END TEST accel_compare 00:07:53.379 ************************************ 00:07:53.379 15:26:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:53.379 15:26:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:53.379 15:26:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.379 15:26:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.379 ************************************ 00:07:53.379 START TEST accel_xor 00:07:53.379 ************************************ 00:07:53.379 15:26:06 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:53.379 15:26:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:53.380 [2024-05-15 15:26:06.124558] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:53.380 [2024-05-15 15:26:06.124623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188013 ] 00:07:53.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.380 [2024-05-15 15:26:06.162013] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
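Note on the blocks above and below: the accel_compare run closes by checking that the software module handled the compare opcode before reporting its timings, and the accel_xor run that starts next drives the same accel_perf example binary with the xor workload. The lines below are an illustrative sketch only, not part of the log; they assume a local SPDK build tree and drop the -c /dev/fd/62 JSON config the harness pipes in (the build_accel_config dump above shows it is empty in this run, and the sketch assumes the example also runs without an explicit config).

  # Sketch: run the xor workload on the software accel module for 1 second
  # and verify the result, mirroring the command recorded in the log above.
  ./build/examples/accel_perf -t 1 -w xor -y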
00:07:53.380 [2024-05-15 15:26:06.196547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.380 [2024-05-15 15:26:06.287118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.380 15:26:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.748 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.749 00:07:54.749 real 0m1.416s 00:07:54.749 user 0m1.260s 00:07:54.749 sys 0m0.158s 00:07:54.749 15:26:07 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.749 15:26:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:54.749 ************************************ 00:07:54.749 END TEST accel_xor 00:07:54.749 ************************************ 00:07:54.749 15:26:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:54.749 15:26:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:54.749 15:26:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.749 15:26:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.749 ************************************ 00:07:54.749 START TEST accel_xor 00:07:54.749 ************************************ 00:07:54.749 15:26:07 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:54.749 [2024-05-15 15:26:07.602244] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:54.749 [2024-05-15 15:26:07.602312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188173 ] 00:07:54.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.749 [2024-05-15 15:26:07.638295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
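The second accel_xor run announced above repeats the previous one with a single change visible on the accel_perf command line: -x 3. The parameter dump changes accordingly, from val=2 in the first run to val=3 here, i.e. three source buffers are xor'ed instead of two. A side-by-side sketch under the same assumptions as the earlier sketch:

  # Sketch: the two xor runs recorded in this log differ only in source count.
  ./build/examples/accel_perf -t 1 -w xor -y        # 2 source buffers (previous run)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # 3 source buffers (this run)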
00:07:54.749 [2024-05-15 15:26:07.675146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.749 [2024-05-15 15:26:07.764736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.749 15:26:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.120 15:26:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:56.120 15:26:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.120 00:07:56.120 real 0m1.420s 00:07:56.120 user 0m1.265s 00:07:56.120 sys 0m0.159s 00:07:56.120 15:26:09 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.120 15:26:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:56.120 ************************************ 00:07:56.120 END TEST accel_xor 00:07:56.120 ************************************ 00:07:56.120 15:26:09 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:56.120 15:26:09 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:56.120 15:26:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.120 15:26:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.120 ************************************ 00:07:56.120 START TEST accel_dif_verify 00:07:56.120 ************************************ 00:07:56.120 15:26:09 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:56.120 15:26:09 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:56.120 [2024-05-15 15:26:09.070850] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:56.120 [2024-05-15 15:26:09.070914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188326 ] 00:07:56.120 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.120 [2024-05-15 15:26:09.107997] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
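accel_dif_verify, which starts above, switches the workload to the DIF verification opcode; its parameter dump below carries '4096 bytes' buffers plus a '512 bytes' and an '8 bytes' value, consistent with 512-byte blocks each protected by 8 bytes of DIF metadata. A minimal sketch of the same one-second software run, under the same assumptions as the earlier sketches:

  # Sketch: DIF-verify workload for 1 second on the software module.
  ./build/examples/accel_perf -t 1 -w dif_verify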
00:07:56.120 [2024-05-15 15:26:09.139589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.379 [2024-05-15 15:26:09.232463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.379 15:26:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:57.752 15:26:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.752 00:07:57.752 real 0m1.406s 00:07:57.752 user 0m1.254s 00:07:57.752 sys 0m0.156s 00:07:57.752 15:26:10 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.752 15:26:10 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:57.752 ************************************ 00:07:57.752 END TEST accel_dif_verify 00:07:57.752 ************************************ 00:07:57.752 15:26:10 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:57.752 15:26:10 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:57.752 15:26:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.752 15:26:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.752 ************************************ 00:07:57.752 START TEST accel_dif_generate 00:07:57.752 ************************************ 00:07:57.752 15:26:10 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:57.752 15:26:10 accel.accel_dif_generate -- 
accel/accel.sh@12 -- # build_accel_config 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:57.752 [2024-05-15 15:26:10.528191] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:57.752 [2024-05-15 15:26:10.528279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188603 ] 00:07:57.752 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.752 [2024-05-15 15:26:10.564588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.752 [2024-05-15 15:26:10.599240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.752 [2024-05-15 15:26:10.686799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.752 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.753 15:26:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:59.126 15:26:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.126 00:07:59.126 real 0m1.410s 00:07:59.126 user 0m1.269s 00:07:59.126 sys 0m0.145s 00:07:59.126 15:26:11 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.126 15:26:11 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:59.126 ************************************ 00:07:59.126 END TEST accel_dif_generate 00:07:59.126 ************************************ 00:07:59.126 15:26:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:59.126 15:26:11 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:59.126 15:26:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.126 15:26:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.126 ************************************ 00:07:59.126 START TEST accel_dif_generate_copy 00:07:59.126 ************************************ 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:59.126 15:26:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:59.127 [2024-05-15 15:26:11.985892] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:07:59.127 [2024-05-15 15:26:11.985955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188761 ] 00:07:59.127 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.127 [2024-05-15 15:26:12.022587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
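The accel_dif_generate_copy block announced above is the copy variant of the previous test: DIF metadata is generated while the data is copied to a destination buffer, again as a one-second software run. Sketch under the same assumptions as the earlier ones:

  # Sketch: DIF generate-and-copy workload, mirroring the logged command.
  ./build/examples/accel_perf -t 1 -w dif_generate_copy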
00:07:59.127 [2024-05-15 15:26:12.057354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.127 [2024-05-15 15:26:12.147455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 15:26:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.499 00:08:00.499 real 0m1.417s 00:08:00.499 user 0m1.268s 00:08:00.499 sys 0m0.151s 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.499 15:26:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:00.499 ************************************ 00:08:00.499 END TEST accel_dif_generate_copy 00:08:00.499 ************************************ 00:08:00.499 15:26:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:00.499 15:26:13 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.499 15:26:13 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:00.499 15:26:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.499 15:26:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.499 ************************************ 00:08:00.499 START TEST accel_comp 00:08:00.499 ************************************ 00:08:00.499 15:26:13 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:00.499 15:26:13 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:00.499 [2024-05-15 15:26:13.461495] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:00.499 [2024-05-15 15:26:13.461557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188919 ] 00:08:00.499 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.499 [2024-05-15 15:26:13.497540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.499 [2024-05-15 15:26:13.534627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.756 [2024-05-15 15:26:13.624180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
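The compress pass being set up here reduces to the single accel_perf call echoed at accel/accel.sh@12 above. A hand-run equivalent, assuming the same workspace checkout and the usual meaning of the flags (-t run time in seconds, -w workload, -l input file; the -c /dev/fd/62 argument only feeds the harness-generated accel config to the tool), would be:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 1-second software compress of the bundled test input
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib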
00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.756 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.757 15:26:13 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.757 15:26:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:02.129 15:26:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.129 00:08:02.129 real 0m1.416s 00:08:02.129 user 0m1.273s 00:08:02.129 sys 0m0.147s 00:08:02.129 15:26:14 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.129 15:26:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:02.129 ************************************ 00:08:02.129 END TEST accel_comp 00:08:02.129 ************************************ 00:08:02.129 15:26:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:02.129 15:26:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:02.129 15:26:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.129 15:26:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.129 ************************************ 00:08:02.129 START TEST accel_decomp 00:08:02.129 ************************************ 00:08:02.129 15:26:14 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:02.129 15:26:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:02.129 [2024-05-15 15:26:14.922364] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:02.129 [2024-05-15 15:26:14.922422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189189 ] 00:08:02.129 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.129 [2024-05-15 15:26:14.957710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
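As the accel/accel.sh@12 trace above shows, the decompress run differs from the compress one only in the workload name and the trailing -y (verify) flag. The @19-@23 lines around it are the harness splitting the tool's key:value output back into accel_module and accel_opc; a rough sketch of that loop, reconstructed from the trace rather than copied from the upstream accel.sh, is:

    # illustrative reconstruction; the key patterns and the command being read
    # from are assumptions, not the verbatim upstream script
    while IFS=: read -r var val; do
        case "$var" in
            *module*)   accel_module=${val## } ;;  # ends up as "software" here
            *workload*) accel_opc=${val## } ;;     # ends up as "decompress" here
        esac
    done < <("${accel_perf_cmd[@]}")               # hypothetical array holding the command
    [[ -n $accel_module ]] && [[ -n $accel_opc ]]  # the @27 checks seen after each run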
00:08:02.129 [2024-05-15 15:26:14.992685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.129 [2024-05-15 15:26:15.082909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:02.129 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.130 15:26:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
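Each sub-test here is driven by the run_test helper from common/autotest_common.sh: it prints the START TEST banner, runs the named test function under the bash time builtin (which is where the real/user/sys triple just below comes from), and closes with the END TEST banner. A minimal stand-in with the same observable behaviour, reconstructed for illustration rather than taken from the upstream helper, is:

    # simplified run_test: banner, timed execution, closing banner
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        echo "************ END TEST $name ************"
    }
    run_test accel_decomp accel_test -t 1 -w decompress -l ./test/accel/bib -y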
00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.537 15:26:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.537 00:08:03.537 real 0m1.414s 00:08:03.537 user 0m1.260s 00:08:03.537 sys 0m0.157s 00:08:03.537 15:26:16 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.537 15:26:16 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:03.537 ************************************ 00:08:03.537 END TEST accel_decomp 00:08:03.537 ************************************ 00:08:03.537 15:26:16 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.537 15:26:16 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:03.537 15:26:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.537 15:26:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.537 ************************************ 00:08:03.537 START TEST accel_decmop_full 00:08:03.537 ************************************ 00:08:03.537 15:26:16 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.537 15:26:16 
accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:03.537 [2024-05-15 15:26:16.384731] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:03.537 [2024-05-15 15:26:16.384794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189355 ] 00:08:03.537 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.537 [2024-05-15 15:26:16.421533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.537 [2024-05-15 15:26:16.456011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.537 [2024-05-15 15:26:16.547206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val='111250 bytes' 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:03.537 15:26:16 
accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 15:26:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:04.910 15:26:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.910 00:08:04.910 real 0m1.435s 00:08:04.910 user 0m1.284s 00:08:04.910 sys 0m0.154s 00:08:04.910 15:26:17 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.910 15:26:17 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:04.910 ************************************ 00:08:04.910 END TEST accel_decmop_full 00:08:04.910 ************************************ 00:08:04.910 15:26:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.910 15:26:17 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:04.910 15:26:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.910 15:26:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.910 ************************************ 00:08:04.910 START TEST accel_decomp_mcore 00:08:04.910 ************************************ 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- 
common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:04.910 15:26:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:04.910 [2024-05-15 15:26:17.869030] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:04.910 [2024-05-15 15:26:17.869096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189509 ] 00:08:04.910 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.910 [2024-05-15 15:26:17.906106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
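The _mcore variant repeats the same verify-enabled decompress job across a four-core mask: the only addition to the command line is -m 0xf, which shows up as -c 0xf in the DPDK EAL parameters above and as the four reactor start-up notices just below. A hand-run equivalent, under the same path assumptions as before, would be:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # same 1-second decompress with verification, spread across cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf

Consistent with that, the timing summary further down reports roughly 4.7 s of user CPU time against about 1.4 s of wall-clock time, the expected signature of the work fanning out over four cores.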
00:08:04.910 [2024-05-15 15:26:17.940905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.168 [2024-05-15 15:26:18.035246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.168 [2024-05-15 15:26:18.035297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.168 [2024-05-15 15:26:18.035413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.168 [2024-05-15 15:26:18.035415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.168 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.169 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.169 15:26:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 
15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.543 00:08:06.543 real 0m1.426s 00:08:06.543 user 0m4.727s 00:08:06.543 sys 0m0.159s 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.543 15:26:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:06.543 ************************************ 00:08:06.543 END TEST accel_decomp_mcore 00:08:06.543 ************************************ 00:08:06.543 15:26:19 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.543 15:26:19 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:06.543 15:26:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.543 15:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.543 ************************************ 00:08:06.543 START TEST accel_decomp_full_mcore 00:08:06.543 ************************************ 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:06.543 [2024-05-15 15:26:19.350204] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:06.543 [2024-05-15 15:26:19.350300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189696 ] 00:08:06.543 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.543 [2024-05-15 15:26:19.387110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
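accel_decomp_full_mcore combines the two earlier variations: -o 0 (the same transfer-size setting used by accel_decmop_full above, which, judging by the values echoed back, makes the tool use the full input size of 111250 bytes instead of the 4096-byte default) together with the -m 0xf core mask. Assuming the same workspace layout, the traced command is equivalent to:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # verify-enabled decompress, full-size transfers, four worker cores
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf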
00:08:06.543 [2024-05-15 15:26:19.422071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.543 [2024-05-15 15:26:19.514321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.543 [2024-05-15 15:26:19.514376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.543 [2024-05-15 15:26:19.514491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.543 [2024-05-15 15:26:19.514494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.543 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:06.544 15:26:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.917 00:08:07.917 real 0m1.428s 00:08:07.917 user 0m4.739s 00:08:07.917 sys 0m0.164s 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.917 15:26:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:07.917 ************************************ 00:08:07.917 END TEST accel_decomp_full_mcore 00:08:07.917 ************************************ 00:08:07.917 15:26:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.917 15:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:07.917 15:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.917 15:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.917 ************************************ 00:08:07.917 START TEST accel_decomp_mthread 00:08:07.917 ************************************ 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:07.917 15:26:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:07.917 [2024-05-15 15:26:20.829921] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:07.917 [2024-05-15 15:26:20.829974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189945 ] 00:08:07.917 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.917 [2024-05-15 15:26:20.865296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:07.917 [2024-05-15 15:26:20.900357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.917 [2024-05-15 15:26:20.987395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.174 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.175 15:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.546 00:08:09.546 real 0m1.412s 00:08:09.546 user 0m1.262s 00:08:09.546 sys 0m0.154s 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.546 15:26:22 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:09.546 ************************************ 00:08:09.546 END TEST accel_decomp_mthread 00:08:09.546 ************************************ 00:08:09.546 15:26:22 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.546 15:26:22 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:09.546 15:26:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.546 15:26:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.546 ************************************ 00:08:09.546 START TEST accel_decomp_full_mthread 00:08:09.546 ************************************ 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@16 -- # local accel_opc 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:09.547 [2024-05-15 15:26:22.296497] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:09.547 [2024-05-15 15:26:22.296569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190101 ] 00:08:09.547 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.547 [2024-05-15 15:26:22.332863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.547 [2024-05-15 15:26:22.367484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.547 [2024-05-15 15:26:22.458293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.547 15:26:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.919 00:08:10.919 real 0m1.451s 00:08:10.919 user 0m1.297s 00:08:10.919 sys 0m0.158s 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.919 15:26:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:10.919 ************************************ 00:08:10.919 END TEST accel_decomp_full_mthread 00:08:10.919 ************************************ 00:08:10.919 15:26:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:10.919 15:26:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.919 15:26:23 accel -- accel/accel.sh@137 -- # build_accel_config 
00:08:10.919 15:26:23 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:10.919 15:26:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.919 15:26:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.919 15:26:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.919 15:26:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.919 15:26:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.919 15:26:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.919 15:26:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.919 15:26:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:10.919 15:26:23 accel -- accel/accel.sh@41 -- # jq -r . 00:08:10.919 ************************************ 00:08:10.919 START TEST accel_dif_functional_tests 00:08:10.919 ************************************ 00:08:10.919 15:26:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.919 [2024-05-15 15:26:23.817168] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:10.919 [2024-05-15 15:26:23.817234] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190261 ] 00:08:10.919 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.919 [2024-05-15 15:26:23.851471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.919 [2024-05-15 15:26:23.886951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.919 [2024-05-15 15:26:23.978526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.919 [2024-05-15 15:26:23.978591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.919 [2024-05-15 15:26:23.978594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.184 00:08:11.184 00:08:11.184 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.184 http://cunit.sourceforge.net/ 00:08:11.184 00:08:11.184 00:08:11.184 Suite: accel_dif 00:08:11.184 Test: verify: DIF generated, GUARD check ...passed 00:08:11.184 Test: verify: DIF generated, APPTAG check ...passed 00:08:11.184 Test: verify: DIF generated, REFTAG check ...passed 00:08:11.184 Test: verify: DIF not generated, GUARD check ...[2024-05-15 15:26:24.067110] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:11.184 [2024-05-15 15:26:24.067170] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:11.184 passed 00:08:11.184 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 15:26:24.067229] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:11.184 [2024-05-15 15:26:24.067258] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:11.184 passed 00:08:11.184 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 15:26:24.067291] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:11.184 [2024-05-15 15:26:24.067318] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:11.184 passed 00:08:11.184 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:08:11.184 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 15:26:24.067377] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:11.184 passed 00:08:11.184 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:11.184 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:11.184 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:11.184 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 15:26:24.067529] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:11.184 passed 00:08:11.184 Test: generate copy: DIF generated, GUARD check ...passed 00:08:11.184 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:11.184 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:11.184 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:11.184 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:11.184 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:11.184 Test: generate copy: iovecs-len validate ...[2024-05-15 15:26:24.067750] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:11.184 passed 00:08:11.184 Test: generate copy: buffer alignment validate ...passed 00:08:11.184 00:08:11.184 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.184 suites 1 1 n/a 0 0 00:08:11.184 tests 20 20 20 0 0 00:08:11.184 asserts 204 204 204 0 n/a 00:08:11.184 00:08:11.184 Elapsed time = 0.002 seconds 00:08:11.184 00:08:11.184 real 0m0.499s 00:08:11.184 user 0m0.751s 00:08:11.184 sys 0m0.189s 00:08:11.184 15:26:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.184 15:26:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 ************************************ 00:08:11.184 END TEST accel_dif_functional_tests 00:08:11.184 ************************************ 00:08:11.448 00:08:11.448 real 0m32.026s 00:08:11.448 user 0m35.086s 00:08:11.448 sys 0m4.860s 00:08:11.448 15:26:24 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.448 15:26:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 ************************************ 00:08:11.448 END TEST accel 00:08:11.448 ************************************ 00:08:11.448 15:26:24 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:11.448 15:26:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:11.448 15:26:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.448 15:26:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 ************************************ 00:08:11.448 START TEST accel_rpc 00:08:11.448 ************************************ 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:11.448 * Looking for test storage... 
00:08:11.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:11.448 15:26:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:11.448 15:26:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1190452 00:08:11.448 15:26:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:11.448 15:26:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1190452 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1190452 ']' 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:11.448 15:26:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 [2024-05-15 15:26:24.458763] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:11.448 [2024-05-15 15:26:24.458860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190452 ] 00:08:11.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.448 [2024-05-15 15:26:24.494457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:11.448 [2024-05-15 15:26:24.526135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.707 [2024-05-15 15:26:24.608432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.707 15:26:24 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:11.707 15:26:24 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:11.707 15:26:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:11.707 15:26:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:11.707 15:26:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:11.707 15:26:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:11.707 15:26:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:11.707 15:26:24 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:11.707 15:26:24 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.707 15:26:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.707 ************************************ 00:08:11.707 START TEST accel_assign_opcode 00:08:11.707 ************************************ 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.707 [2024-05-15 15:26:24.693074] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.707 [2024-05-15 15:26:24.701085] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.707 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.964 software 00:08:11.964 00:08:11.964 real 0m0.297s 
00:08:11.964 user 0m0.037s 00:08:11.964 sys 0m0.010s 00:08:11.964 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.965 15:26:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.965 ************************************ 00:08:11.965 END TEST accel_assign_opcode 00:08:11.965 ************************************ 00:08:11.965 15:26:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1190452 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1190452 ']' 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1190452 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1190452 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1190452' 00:08:11.965 killing process with pid 1190452 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@965 -- # kill 1190452 00:08:11.965 15:26:25 accel_rpc -- common/autotest_common.sh@970 -- # wait 1190452 00:08:12.530 00:08:12.530 real 0m1.066s 00:08:12.530 user 0m1.017s 00:08:12.530 sys 0m0.407s 00:08:12.530 15:26:25 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.530 15:26:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.530 ************************************ 00:08:12.530 END TEST accel_rpc 00:08:12.530 ************************************ 00:08:12.530 15:26:25 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:12.530 15:26:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:12.530 15:26:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.530 15:26:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.530 ************************************ 00:08:12.530 START TEST app_cmdline 00:08:12.530 ************************************ 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:12.530 * Looking for test storage... 00:08:12.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:12.530 15:26:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:12.530 15:26:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1190656 00:08:12.530 15:26:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:12.530 15:26:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1190656 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1190656 ']' 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:12.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:12.530 15:26:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:12.530 [2024-05-15 15:26:25.580878] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:12.530 [2024-05-15 15:26:25.580975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190656 ] 00:08:12.530 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.530 [2024-05-15 15:26:25.616864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:12.789 [2024-05-15 15:26:25.655105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.789 [2024-05-15 15:26:25.744190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.047 15:26:26 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:13.047 15:26:26 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:13.047 15:26:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:13.304 { 00:08:13.304 "version": "SPDK v24.05-pre git sha1 253cca4fc", 00:08:13.304 "fields": { 00:08:13.304 "major": 24, 00:08:13.304 "minor": 5, 00:08:13.304 "patch": 0, 00:08:13.304 "suffix": "-pre", 00:08:13.304 "commit": "253cca4fc" 00:08:13.304 } 00:08:13.304 } 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:13.304 15:26:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:13.304 15:26:26 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.562 request: 00:08:13.562 { 00:08:13.562 "method": "env_dpdk_get_mem_stats", 00:08:13.562 "req_id": 1 00:08:13.562 } 00:08:13.562 Got JSON-RPC error response 00:08:13.562 response: 00:08:13.562 { 00:08:13.562 "code": -32601, 00:08:13.562 "message": "Method not found" 00:08:13.562 } 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.562 15:26:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1190656 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1190656 ']' 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1190656 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1190656 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1190656' 00:08:13.562 killing process with pid 1190656 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@965 -- # kill 1190656 00:08:13.562 15:26:26 app_cmdline -- common/autotest_common.sh@970 -- # wait 1190656 00:08:14.128 00:08:14.128 real 0m1.510s 00:08:14.128 user 0m1.860s 00:08:14.128 sys 0m0.469s 00:08:14.128 15:26:26 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.128 15:26:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 ************************************ 00:08:14.128 END TEST app_cmdline 00:08:14.128 ************************************ 00:08:14.128 15:26:27 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:14.128 15:26:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:14.128 15:26:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.128 15:26:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 ************************************ 00:08:14.128 START TEST version 00:08:14.128 ************************************ 00:08:14.128 15:26:27 version -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:14.128 * Looking for test storage... 00:08:14.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:14.128 15:26:27 version -- app/version.sh@17 -- # get_header_version major 00:08:14.128 15:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # cut -f2 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.128 15:26:27 version -- app/version.sh@17 -- # major=24 00:08:14.128 15:26:27 version -- app/version.sh@18 -- # get_header_version minor 00:08:14.128 15:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # cut -f2 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.128 15:26:27 version -- app/version.sh@18 -- # minor=5 00:08:14.128 15:26:27 version -- app/version.sh@19 -- # get_header_version patch 00:08:14.128 15:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # cut -f2 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.128 15:26:27 version -- app/version.sh@19 -- # patch=0 00:08:14.128 15:26:27 version -- app/version.sh@20 -- # get_header_version suffix 00:08:14.128 15:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # cut -f2 00:08:14.128 15:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.128 15:26:27 version -- app/version.sh@20 -- # suffix=-pre 00:08:14.128 15:26:27 version -- app/version.sh@22 -- # version=24.5 00:08:14.128 15:26:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:14.128 15:26:27 version -- app/version.sh@28 -- # version=24.5rc0 00:08:14.128 15:26:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:14.128 15:26:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:14.128 15:26:27 version -- app/version.sh@30 -- # py_version=24.5rc0 00:08:14.128 15:26:27 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:14.128 00:08:14.128 real 0m0.107s 00:08:14.128 user 0m0.061s 00:08:14.128 sys 0m0.069s 00:08:14.128 15:26:27 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.128 15:26:27 version -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 ************************************ 00:08:14.128 END TEST version 00:08:14.128 ************************************ 00:08:14.128 15:26:27 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@194 -- # uname -s 00:08:14.128 15:26:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:14.128 15:26:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:14.128 15:26:27 
-- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:14.128 15:26:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:14.128 15:26:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.128 15:26:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 15:26:27 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:08:14.128 15:26:27 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:08:14.128 15:26:27 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:14.128 15:26:27 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:14.128 15:26:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.128 15:26:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 ************************************ 00:08:14.128 START TEST nvmf_tcp 00:08:14.128 ************************************ 00:08:14.128 15:26:27 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:14.387 * Looking for test storage... 00:08:14.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.388 15:26:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:14.388 15:26:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.388 15:26:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.388 15:26:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:14.388 15:26:27 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:14.388 15:26:27 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:14.388 15:26:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:14.388 15:26:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:14.388 15:26:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:14.388 
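For context, the NVME_CONNECT, NVME_HOST, NVMF_PORT and NVME_SUBNQN values exported by nvmf/common.sh above are the knobs that initiator-side tests hand to nvme-cli. The example test below drives I/O with spdk_nvme_perf instead, so the following connect invocation is purely illustrative, assembled from those variables as an assumption about how they are consumed:

  # Hypothetical initiator-side connect built from the variables defined above;
  # 10.0.0.2:4420 is the listener address the TCP test environment sets up later.
  nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
      -n "$NVME_SUBNQN" \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"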
15:26:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.388 15:26:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 ************************************ 00:08:14.388 START TEST nvmf_example 00:08:14.388 ************************************ 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:14.388 * Looking for test storage... 00:08:14.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 
nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:14.388 
15:26:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.388 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.389 15:26:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.918 15:26:29 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.918 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:16.919 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:16.919 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:16.919 
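The gather_supported_nvmf_pci_devs pass above keeps only the supported Intel E810 ports (vendor 0x8086, device 0x159b) and then resolves each PCI address to its kernel net device through sysfs. A trimmed sketch of that resolution loop, assuming pci_devs has already been filtered as in the trace:

  net_devs=()
  for pci in "${pci_devs[@]}"; do                        # e.g. 0000:09:00.0 0000:09:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdev bound to the port
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the name, e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done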
Found net devices under 0000:09:00.0: cvl_0_0 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:16.919 Found net devices under 0000:09:00.1: cvl_0_1 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:08:16.919 00:08:16.919 --- 10.0.0.2 ping statistics --- 00:08:16.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.919 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:16.919 00:08:16.919 --- 10.0.0.1 ping statistics --- 00:08:16.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.919 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1192967 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1192967 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1192967 ']' 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
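nvmf_tcp_init, traced above, wires the two E810 ports into a self-contained NVMe/TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits the listener port, and both directions are smoke-tested with ping. Condensed to the underlying commands (address flushes and error handling omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator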
00:08:16.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:16.919 15:26:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.919 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.852 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:18.110 15:26:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:18.110 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.119 Initializing NVMe Controllers 00:08:28.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.119 Initialization complete. Launching workers. 00:08:28.120 ======================================================== 00:08:28.120 Latency(us) 00:08:28.120 Device Information : IOPS MiB/s Average min max 00:08:28.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14936.13 58.34 4284.38 822.40 23508.80 00:08:28.120 ======================================================== 00:08:28.120 Total : 14936.13 58.34 4284.38 822.40 23508.80 00:08:28.120 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.377 rmmod nvme_tcp 00:08:28.377 rmmod nvme_fabrics 00:08:28.377 rmmod nvme_keyring 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1192967 ']' 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1192967 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1192967 ']' 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1192967 00:08:28.377 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1192967 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1192967' 00:08:28.378 killing process with pid 1192967 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1192967 00:08:28.378 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1192967 00:08:28.636 nvmf threads initialize successfully 00:08:28.636 bdev subsystem init successfully 00:08:28.636 created a nvmf target service 00:08:28.636 create targets's poll groups done 00:08:28.636 all subsystems of target started 00:08:28.636 nvmf target is running 00:08:28.636 all subsystems of target stopped 00:08:28.636 destroy targets's poll groups done 00:08:28.636 destroyed the nvmf target service 00:08:28.636 bdev 
subsystem finish successfully 00:08:28.636 nvmf threads destroy successfully 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.636 15:26:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.537 15:26:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.537 15:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:30.537 15:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.537 15:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:30.537 00:08:30.537 real 0m16.270s 00:08:30.537 user 0m42.225s 00:08:30.537 sys 0m4.705s 00:08:30.537 15:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.537 15:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:30.537 ************************************ 00:08:30.537 END TEST nvmf_example 00:08:30.537 ************************************ 00:08:30.537 15:26:43 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:30.537 15:26:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:30.537 15:26:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.537 15:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.798 ************************************ 00:08:30.798 START TEST nvmf_filesystem 00:08:30.798 ************************************ 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:30.798 * Looking for test storage... 
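The spdk_nvme_perf summary above is internally consistent: at queue depth 64 with 4 KiB random I/O over a 10 s run, 14936.13 IOPS works out to the reported 58.34 MiB/s, and dividing the queue depth by the 4284.38 us average latency (Little's law) gives roughly the same rate. A quick check:

  awk 'BEGIN { printf "%.2f MiB/s\n", 14936.13 * 4096 / 1048576 }'   # 58.34, matches the MiB/s column
  awk 'BEGIN { printf "%.0f IOPS\n", 64 / (4284.38 / 1e6) }'         # ~14938, queue depth / average latency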
00:08:30.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:30.798 15:26:43 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:30.798 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:30.799 15:26:43 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:30.799 #define SPDK_CONFIG_H 00:08:30.799 #define SPDK_CONFIG_APPS 1 00:08:30.799 #define SPDK_CONFIG_ARCH native 00:08:30.799 #undef SPDK_CONFIG_ASAN 00:08:30.799 #undef SPDK_CONFIG_AVAHI 00:08:30.799 #undef SPDK_CONFIG_CET 00:08:30.799 #define SPDK_CONFIG_COVERAGE 1 00:08:30.799 #define SPDK_CONFIG_CROSS_PREFIX 00:08:30.799 #undef SPDK_CONFIG_CRYPTO 00:08:30.799 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:30.799 #undef SPDK_CONFIG_CUSTOMOCF 00:08:30.799 #undef SPDK_CONFIG_DAOS 00:08:30.799 #define SPDK_CONFIG_DAOS_DIR 00:08:30.799 #define SPDK_CONFIG_DEBUG 1 00:08:30.799 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:30.799 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:30.799 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:30.799 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:30.799 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:30.799 #undef SPDK_CONFIG_DPDK_UADK 00:08:30.799 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:30.799 #define SPDK_CONFIG_EXAMPLES 1 00:08:30.799 #undef SPDK_CONFIG_FC 00:08:30.799 #define SPDK_CONFIG_FC_PATH 00:08:30.799 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:30.799 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:30.799 #undef SPDK_CONFIG_FUSE 00:08:30.799 #undef SPDK_CONFIG_FUZZER 00:08:30.799 #define SPDK_CONFIG_FUZZER_LIB 00:08:30.799 #undef SPDK_CONFIG_GOLANG 00:08:30.799 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:30.799 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:30.799 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:30.799 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:30.799 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:30.799 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:30.799 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:30.799 #define SPDK_CONFIG_IDXD 1 00:08:30.799 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:30.799 #undef SPDK_CONFIG_IPSEC_MB 00:08:30.799 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:30.799 #define SPDK_CONFIG_ISAL 1 00:08:30.799 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:30.799 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:30.799 #define SPDK_CONFIG_LIBDIR 00:08:30.799 #undef SPDK_CONFIG_LTO 00:08:30.799 #define SPDK_CONFIG_MAX_LCORES 
00:08:30.799 #define SPDK_CONFIG_NVME_CUSE 1 00:08:30.799 #undef SPDK_CONFIG_OCF 00:08:30.799 #define SPDK_CONFIG_OCF_PATH 00:08:30.799 #define SPDK_CONFIG_OPENSSL_PATH 00:08:30.799 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:30.799 #define SPDK_CONFIG_PGO_DIR 00:08:30.799 #undef SPDK_CONFIG_PGO_USE 00:08:30.799 #define SPDK_CONFIG_PREFIX /usr/local 00:08:30.799 #undef SPDK_CONFIG_RAID5F 00:08:30.799 #undef SPDK_CONFIG_RBD 00:08:30.799 #define SPDK_CONFIG_RDMA 1 00:08:30.799 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:30.799 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:30.799 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:30.799 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:30.799 #define SPDK_CONFIG_SHARED 1 00:08:30.799 #undef SPDK_CONFIG_SMA 00:08:30.799 #define SPDK_CONFIG_TESTS 1 00:08:30.799 #undef SPDK_CONFIG_TSAN 00:08:30.799 #define SPDK_CONFIG_UBLK 1 00:08:30.799 #define SPDK_CONFIG_UBSAN 1 00:08:30.799 #undef SPDK_CONFIG_UNIT_TESTS 00:08:30.799 #undef SPDK_CONFIG_URING 00:08:30.799 #define SPDK_CONFIG_URING_PATH 00:08:30.799 #undef SPDK_CONFIG_URING_ZNS 00:08:30.799 #undef SPDK_CONFIG_USDT 00:08:30.799 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:30.799 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:30.799 #define SPDK_CONFIG_VFIO_USER 1 00:08:30.799 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:30.799 #define SPDK_CONFIG_VHOST 1 00:08:30.799 #define SPDK_CONFIG_VIRTIO 1 00:08:30.799 #undef SPDK_CONFIG_VTUNE 00:08:30.799 #define SPDK_CONFIG_VTUNE_DIR 00:08:30.799 #define SPDK_CONFIG_WERROR 1 00:08:30.799 #define SPDK_CONFIG_WPDK_DIR 00:08:30.799 #undef SPDK_CONFIG_XNVME 00:08:30.799 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.799 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : main 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:30.800 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1194680 ]] 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1194680 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.l1xbD4 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:30.801 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.l1xbD4/tests/target /tmp/spdk.l1xbD4 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=964968448 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4319461376 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=47022936064 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=14971793408 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30992654336 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389961728 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8986624 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:30.802 15:26:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996549632 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=815104 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:30.802 * Looking for test storage... 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=47022936064 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=17186385920 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:30.802 
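The storage probe traced above boils down to a free-space check against the df output just printed. A minimal bash sketch of that arithmetic, using this run's figures for the / (overlay) mount — the variable names here are illustrative, not the harness's own:

    # Sketch only: reproduces the numbers shown in the trace above for the / mount.
    requested_size=2214592512      # scratch space requested for the test (~2 GiB plus slack)
    fs_size=61994729472            # total size of /
    fs_used=14971793408            # space already used on /
    target_space=47022936064       # space available on /
    (( target_space >= requested_size )) || echo "would fall back to the next candidate dir"
    new_size=$(( fs_used + requested_size ))      # 17186385920, matching the trace's new_size
    (( new_size * 100 / fs_size > 95 )) && echo "would refuse: / would end up >95% full"
    # Neither guard trips here, so the test storage lands under
    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target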
15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.802 15:26:43 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.802 15:26:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.804 15:26:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:33.336 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:33.336 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.336 15:26:46 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:33.336 Found net devices under 0000:09:00.0: cvl_0_0 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:33.336 Found net devices under 0000:09:00.1: cvl_0_1 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:33.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:08:33.336 00:08:33.336 --- 10.0.0.2 ping statistics --- 00:08:33.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.336 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:08:33.336 00:08:33.336 --- 10.0.0.1 ping statistics --- 00:08:33.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.336 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.336 15:26:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.594 ************************************ 00:08:33.594 START TEST nvmf_filesystem_no_in_capsule 00:08:33.594 ************************************ 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:33.594 15:26:46 
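The ping results above close out the network setup: the harness moved one E810 port (cvl_0_0) into a private namespace to act as the target side and left its sibling (cvl_0_1) in the default namespace as the initiator side. Condensed from the trace, the plumbing amounts to the following; interface names and the 10.0.0.0/24 addressing are specific to this rig, and the commands need root:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                        # ~0.2 ms, as reported above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1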
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1196597 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1196597 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1196597 ']' 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.594 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.595 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.595 [2024-05-15 15:26:46.517289] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:33.595 [2024-05-15 15:26:46.517359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.595 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.595 [2024-05-15 15:26:46.560382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.595 [2024-05-15 15:26:46.592935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.595 [2024-05-15 15:26:46.679617] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.595 [2024-05-15 15:26:46.679673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.595 [2024-05-15 15:26:46.679686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.595 [2024-05-15 15:26:46.679697] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.595 [2024-05-15 15:26:46.679707] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:33.595 [2024-05-15 15:26:46.679795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.595 [2024-05-15 15:26:46.679863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.595 [2024-05-15 15:26:46.679927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.595 [2024-05-15 15:26:46.679925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.853 [2024-05-15 15:26:46.835047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.853 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.111 Malloc1 00:08:34.111 15:26:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.111 [2024-05-15 15:26:47.020300] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:34.111 [2024-05-15 15:26:47.020658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:34.111 { 00:08:34.111 "name": "Malloc1", 00:08:34.111 "aliases": [ 00:08:34.111 "89de076c-661e-48df-a213-f9297959bb73" 00:08:34.111 ], 00:08:34.111 "product_name": "Malloc disk", 00:08:34.111 "block_size": 512, 00:08:34.111 "num_blocks": 1048576, 00:08:34.111 "uuid": "89de076c-661e-48df-a213-f9297959bb73", 00:08:34.111 "assigned_rate_limits": { 00:08:34.111 "rw_ios_per_sec": 0, 00:08:34.111 "rw_mbytes_per_sec": 0, 00:08:34.111 "r_mbytes_per_sec": 0, 00:08:34.111 "w_mbytes_per_sec": 0 00:08:34.111 }, 00:08:34.111 "claimed": true, 00:08:34.111 "claim_type": "exclusive_write", 00:08:34.111 "zoned": false, 00:08:34.111 "supported_io_types": { 00:08:34.111 "read": true, 00:08:34.111 "write": true, 00:08:34.111 "unmap": true, 00:08:34.111 "write_zeroes": true, 00:08:34.111 "flush": true, 00:08:34.111 "reset": true, 00:08:34.111 "compare": false, 00:08:34.111 "compare_and_write": false, 00:08:34.111 "abort": true, 00:08:34.111 "nvme_admin": false, 00:08:34.111 "nvme_io": false 00:08:34.111 }, 00:08:34.111 "memory_domains": [ 00:08:34.111 { 00:08:34.111 "dma_device_id": "system", 00:08:34.111 "dma_device_type": 1 
00:08:34.111 }, 00:08:34.111 { 00:08:34.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.111 "dma_device_type": 2 00:08:34.111 } 00:08:34.111 ], 00:08:34.111 "driver_specific": {} 00:08:34.111 } 00:08:34.111 ]' 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:34.111 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.677 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.677 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:34.677 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.677 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:34.677 15:26:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:37.204 15:26:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:37.204 15:26:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:37.204 15:26:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.136 ************************************ 00:08:38.136 START TEST filesystem_ext4 00:08:38.136 ************************************ 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:38.136 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:38.136 15:26:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:38.136 mke2fs 1.46.5 (30-Dec-2021) 00:08:38.394 Discarding device blocks: 0/522240 done 00:08:38.394 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:38.394 Filesystem UUID: 5cfc1d1a-3cd5-49eb-ab3a-f37e90a19a06 00:08:38.394 Superblock backups stored on blocks: 00:08:38.394 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:38.394 00:08:38.394 Allocating group tables: 0/64 done 00:08:38.394 Writing inode tables: 0/64 done 00:08:38.394 Creating journal (8192 blocks): done 00:08:38.394 Writing superblocks and filesystem accounting information: 0/64 done 00:08:38.394 00:08:38.394 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:38.394 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.651 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1196597 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.909 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.910 00:08:38.910 real 0m0.637s 00:08:38.910 user 0m0.017s 00:08:38.910 sys 0m0.029s 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:38.910 ************************************ 00:08:38.910 END TEST filesystem_ext4 00:08:38.910 ************************************ 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.910 15:26:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.910 ************************************ 00:08:38.910 START TEST filesystem_btrfs 00:08:38.910 ************************************ 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:38.910 15:26:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:39.167 btrfs-progs v6.6.2 00:08:39.167 See https://btrfs.readthedocs.io for more information. 00:08:39.167 00:08:39.167 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:39.167 NOTE: several default settings have changed in version 5.15, please make sure 00:08:39.167 this does not affect your deployments: 00:08:39.167 - DUP for metadata (-m dup) 00:08:39.167 - enabled no-holes (-O no-holes) 00:08:39.167 - enabled free-space-tree (-R free-space-tree) 00:08:39.167 00:08:39.167 Label: (null) 00:08:39.167 UUID: 374c93e6-586e-4728-847e-c40355ebbb13 00:08:39.167 Node size: 16384 00:08:39.167 Sector size: 4096 00:08:39.167 Filesystem size: 510.00MiB 00:08:39.167 Block group profiles: 00:08:39.167 Data: single 8.00MiB 00:08:39.167 Metadata: DUP 32.00MiB 00:08:39.167 System: DUP 8.00MiB 00:08:39.167 SSD detected: yes 00:08:39.167 Zoned device: no 00:08:39.167 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:39.167 Runtime features: free-space-tree 00:08:39.167 Checksum: crc32c 00:08:39.167 Number of devices: 1 00:08:39.167 Devices: 00:08:39.167 ID SIZE PATH 00:08:39.167 1 510.00MiB /dev/nvme0n1p1 00:08:39.167 00:08:39.167 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:39.167 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1196597 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.099 00:08:40.099 real 0m1.039s 00:08:40.099 user 0m0.015s 00:08:40.099 sys 0m0.040s 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:40.099 ************************************ 00:08:40.099 END TEST filesystem_btrfs 00:08:40.099 ************************************ 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:40.099 15:26:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.099 ************************************ 00:08:40.099 START TEST filesystem_xfs 00:08:40.099 ************************************ 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:40.099 15:26:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:40.099 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:40.099 = sectsz=512 attr=2, projid32bit=1 00:08:40.099 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:40.099 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:40.099 data = bsize=4096 blocks=130560, imaxpct=25 00:08:40.099 = sunit=0 swidth=0 blks 00:08:40.099 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:40.099 log =internal log bsize=4096 blocks=16384, version=2 00:08:40.099 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:40.099 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:41.031 Discarding blocks...Done. 
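The ext4, btrfs and xfs subtests in this trace all follow one pattern. The harness first confirms that the exported namespace is exactly the 512 MiB malloc bdev it created (block_size times num_blocks from bdev_get_bdevs, compared against the size derived from /sys/block/nvme0n1), then formats the single GPT partition and exercises it. Condensed into a standalone sketch, assuming the /dev/nvme0n1p1 partition and /mnt/device mount point seen in the trace; the loop and variable names are illustrative rather than filesystem.sh verbatim:

    # sanity check: namespace size must equal the malloc bdev size (536870912 bytes)
    malloc_size=$((512 * 1048576))                          # block_size * num_blocks reported by bdev_get_bdevs
    nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))   # assumption: sysfs sector count * 512, matching what sec_size_to_bytes echoes in the log
    (( nvme_size == malloc_size ))

    # create-and-exercise loop, one iteration per filesystem type
    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    for fstype in ext4 btrfs xfs; do
        force=-f; [ "$fstype" = ext4 ] && force=-F          # mkfs.ext4 takes -F, btrfs/xfs take -f
        mkfs."$fstype" "$force" "$dev"
        mount "$dev" "$mnt"
        touch "$mnt/aaa"                                    # create a file, flush, remove it, flush again
        sync
        rm "$mnt/aaa"
        sync
        umount "$mnt"
        lsblk -l -o NAME | grep -q -w nvme0n1p1             # the partition must still be visible afterwards
    done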
00:08:41.031 15:26:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:41.031 15:26:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1196597 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.927 00:08:42.927 real 0m2.767s 00:08:42.927 user 0m0.017s 00:08:42.927 sys 0m0.035s 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:42.927 ************************************ 00:08:42.927 END TEST filesystem_xfs 00:08:42.927 ************************************ 00:08:42.927 15:26:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:42.927 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:42.927 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:43.185 
15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1196597 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1196597 ']' 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1196597 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1196597 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1196597' 00:08:43.185 killing process with pid 1196597 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1196597 00:08:43.185 [2024-05-15 15:26:56.118421] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:43.185 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1196597 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:43.807 00:08:43.807 real 0m10.099s 00:08:43.807 user 0m38.589s 00:08:43.807 sys 0m1.495s 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.807 ************************************ 00:08:43.807 END TEST nvmf_filesystem_no_in_capsule 00:08:43.807 ************************************ 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.807 ************************************ 00:08:43.807 START TEST nvmf_filesystem_in_capsule 00:08:43.807 ************************************ 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1198019 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1198019 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1198019 ']' 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:43.807 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.807 [2024-05-15 15:26:56.669606] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:43.807 [2024-05-15 15:26:56.669684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.807 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.807 [2024-05-15 15:26:56.712848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:43.807 [2024-05-15 15:26:56.749019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.807 [2024-05-15 15:26:56.840712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.807 [2024-05-15 15:26:56.840769] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
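The pass starting here (nvmf_filesystem_in_capsule) repeats the whole flow above against a freshly started nvmf_tgt; the only intended difference is that the TCP transport is created with 4096 bytes of in-capsule data allowed, where the first pass disabled it. Roughly, the target and initiator setup it is about to re-run looks like the following plain invocations (a sketch: the harness issues the RPCs through its rpc_cmd wrapper, for which rpc.py stands in here; the NQN, address and sizes are the ones visible in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096      # first pass used -c 0 (no in-capsule data)
    rpc.py bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB backing bdev with 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side; the trace also passes a matching --hostid
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a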
00:08:43.807 [2024-05-15 15:26:56.840785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.807 [2024-05-15 15:26:56.840798] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.807 [2024-05-15 15:26:56.840810] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.807 [2024-05-15 15:26:56.840907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.807 [2024-05-15 15:26:56.840973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.807 [2024-05-15 15:26:56.843239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.807 [2024-05-15 15:26:56.843251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.064 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.065 [2024-05-15 15:26:56.989818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.065 15:26:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.065 Malloc1 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.065 15:26:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.065 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.065 [2024-05-15 15:26:57.164108] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:44.065 [2024-05-15 15:26:57.164453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:44.322 { 00:08:44.322 "name": "Malloc1", 00:08:44.322 "aliases": [ 00:08:44.322 "c4570ae4-e69c-4b7c-b908-574648978beb" 00:08:44.322 ], 00:08:44.322 "product_name": "Malloc disk", 00:08:44.322 "block_size": 512, 00:08:44.322 "num_blocks": 1048576, 00:08:44.322 "uuid": "c4570ae4-e69c-4b7c-b908-574648978beb", 00:08:44.322 "assigned_rate_limits": { 00:08:44.322 "rw_ios_per_sec": 0, 00:08:44.322 "rw_mbytes_per_sec": 0, 00:08:44.322 "r_mbytes_per_sec": 0, 00:08:44.322 "w_mbytes_per_sec": 0 00:08:44.322 }, 00:08:44.322 "claimed": true, 00:08:44.322 "claim_type": "exclusive_write", 00:08:44.322 "zoned": false, 00:08:44.322 "supported_io_types": { 00:08:44.322 "read": true, 00:08:44.322 "write": true, 00:08:44.322 "unmap": true, 00:08:44.322 "write_zeroes": true, 00:08:44.322 "flush": true, 00:08:44.322 "reset": true, 
00:08:44.322 "compare": false, 00:08:44.322 "compare_and_write": false, 00:08:44.322 "abort": true, 00:08:44.322 "nvme_admin": false, 00:08:44.322 "nvme_io": false 00:08:44.322 }, 00:08:44.322 "memory_domains": [ 00:08:44.322 { 00:08:44.322 "dma_device_id": "system", 00:08:44.322 "dma_device_type": 1 00:08:44.322 }, 00:08:44.322 { 00:08:44.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.322 "dma_device_type": 2 00:08:44.322 } 00:08:44.322 ], 00:08:44.322 "driver_specific": {} 00:08:44.322 } 00:08:44.322 ]' 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:44.322 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.887 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.887 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:44.887 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.887 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:44.887 15:26:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:46.783 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:47.040 15:26:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:47.604 15:27:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.975 ************************************ 00:08:48.975 START TEST filesystem_in_capsule_ext4 00:08:48.975 ************************************ 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:48.975 15:27:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:48.975 mke2fs 1.46.5 (30-Dec-2021) 00:08:48.975 Discarding device blocks: 0/522240 done 00:08:48.975 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:48.975 Filesystem UUID: a772e61b-ee80-4c0b-8107-19e5b8b457a0 00:08:48.975 Superblock backups stored on blocks: 00:08:48.975 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:48.975 00:08:48.975 Allocating group tables: 0/64 done 00:08:48.975 Writing inode tables: 0/64 done 00:08:48.975 Creating journal (8192 blocks): done 00:08:48.975 Writing superblocks and filesystem accounting information: 0/64 done 00:08:48.975 00:08:48.975 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:48.975 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1198019 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:49.233 00:08:49.233 real 0m0.657s 00:08:49.233 user 0m0.017s 00:08:49.233 sys 0m0.027s 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.233 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:49.233 ************************************ 00:08:49.233 END TEST filesystem_in_capsule_ext4 00:08:49.233 ************************************ 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.491 ************************************ 00:08:49.491 START TEST filesystem_in_capsule_btrfs 00:08:49.491 ************************************ 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:49.491 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:49.749 btrfs-progs v6.6.2 00:08:49.749 See https://btrfs.readthedocs.io for more information. 00:08:49.749 00:08:49.749 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:49.749 NOTE: several default settings have changed in version 5.15, please make sure 00:08:49.749 this does not affect your deployments: 00:08:49.749 - DUP for metadata (-m dup) 00:08:49.749 - enabled no-holes (-O no-holes) 00:08:49.749 - enabled free-space-tree (-R free-space-tree) 00:08:49.749 00:08:49.749 Label: (null) 00:08:49.749 UUID: 4cbbb39b-3f34-4433-8610-0504aea38fe3 00:08:49.749 Node size: 16384 00:08:49.749 Sector size: 4096 00:08:49.749 Filesystem size: 510.00MiB 00:08:49.749 Block group profiles: 00:08:49.749 Data: single 8.00MiB 00:08:49.749 Metadata: DUP 32.00MiB 00:08:49.749 System: DUP 8.00MiB 00:08:49.749 SSD detected: yes 00:08:49.749 Zoned device: no 00:08:49.749 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:49.749 Runtime features: free-space-tree 00:08:49.749 Checksum: crc32c 00:08:49.749 Number of devices: 1 00:08:49.749 Devices: 00:08:49.749 ID SIZE PATH 00:08:49.750 1 510.00MiB /dev/nvme0n1p1 00:08:49.750 00:08:49.750 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:49.750 15:27:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1198019 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:50.682 00:08:50.682 real 0m1.224s 00:08:50.682 user 0m0.010s 00:08:50.682 sys 0m0.045s 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:50.682 ************************************ 00:08:50.682 END TEST filesystem_in_capsule_btrfs 00:08:50.682 ************************************ 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.682 ************************************ 00:08:50.682 START TEST filesystem_in_capsule_xfs 00:08:50.682 ************************************ 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:50.682 15:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:50.682 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:50.682 = sectsz=512 attr=2, projid32bit=1 00:08:50.682 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:50.683 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:50.683 data = bsize=4096 blocks=130560, imaxpct=25 00:08:50.683 = sunit=0 swidth=0 blks 00:08:50.683 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:50.683 log =internal log bsize=4096 blocks=16384, version=2 00:08:50.683 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:50.683 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:52.055 Discarding blocks...Done. 
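Once the xfs subtest passes, the remaining trace is teardown, performed in the reverse order of setup. Condensed (a sketch of the commands visible below; $nvmfpid is the target process id, 1198019 in this pass, and the harness's killprocess also waits for it to exit):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition under an exclusive lock on the disk
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator; lsblk is then polled until the serial disappears
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                                       # killprocess: stop the nvmf_tgt started for this pass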
00:08:52.055 15:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:52.055 15:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1198019 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.953 00:08:53.953 real 0m3.170s 00:08:53.953 user 0m0.014s 00:08:53.953 sys 0m0.039s 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:53.953 ************************************ 00:08:53.953 END TEST filesystem_in_capsule_xfs 00:08:53.953 ************************************ 00:08:53.953 15:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.210 15:27:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1198019 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1198019 ']' 00:08:54.210 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1198019 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1198019 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1198019' 00:08:54.211 killing process with pid 1198019 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1198019 00:08:54.211 [2024-05-15 15:27:07.251862] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:54.211 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1198019 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:54.775 00:08:54.775 real 0m11.047s 00:08:54.775 user 0m42.310s 00:08:54.775 sys 0m1.548s 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.775 ************************************ 00:08:54.775 END TEST nvmf_filesystem_in_capsule 00:08:54.775 ************************************ 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.775 rmmod nvme_tcp 00:08:54.775 rmmod nvme_fabrics 00:08:54.775 rmmod nvme_keyring 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.775 15:27:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.307 15:27:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:57.307 00:08:57.307 real 0m26.150s 00:08:57.307 user 1m21.950s 00:08:57.307 sys 0m5.012s 00:08:57.307 15:27:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:57.307 15:27:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.307 ************************************ 00:08:57.307 END TEST nvmf_filesystem 00:08:57.307 ************************************ 00:08:57.307 15:27:09 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:57.307 15:27:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:57.307 15:27:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:57.307 15:27:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.307 ************************************ 00:08:57.307 START TEST nvmf_target_discovery 00:08:57.307 ************************************ 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:57.308 * Looking for test storage... 
00:08:57.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.308 15:27:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.205 15:27:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:59.205 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:59.205 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:59.205 Found net devices under 0000:09:00.0: cvl_0_0 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:59.205 Found net devices under 0000:09:00.1: cvl_0_1 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.205 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:59.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:08:59.462 00:08:59.462 --- 10.0.0.2 ping statistics --- 00:08:59.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.462 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:08:59.462 00:08:59.462 --- 10.0.0.1 ping statistics --- 00:08:59.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.462 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1202396 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1202396 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1202396 ']' 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:59.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.462 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.462 [2024-05-15 15:27:12.497711] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:08:59.462 [2024-05-15 15:27:12.497791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.462 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.462 [2024-05-15 15:27:12.544110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.719 [2024-05-15 15:27:12.575740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.719 [2024-05-15 15:27:12.661000] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.719 [2024-05-15 15:27:12.661070] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.719 [2024-05-15 15:27:12.661084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.719 [2024-05-15 15:27:12.661095] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.719 [2024-05-15 15:27:12.661105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.719 [2024-05-15 15:27:12.661184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.719 [2024-05-15 15:27:12.661229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.719 [2024-05-15 15:27:12.661325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.719 [2024-05-15 15:27:12.661328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.719 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.719 [2024-05-15 15:27:12.811782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 Null1 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 [2024-05-15 15:27:12.851853] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:59.978 [2024-05-15 15:27:12.852154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 Null2 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
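The loop entered at target/discovery.sh@26 above repeats the same RPC sequence for Null1 through Null4: create a null bdev, create a subsystem, attach the bdev as a namespace, and add a TCP listener. rpc_cmd in the trace forwards these calls to the SPDK RPC server started earlier inside the cvl_0_0_ns_spdk namespace; a standalone sketch of one iteration (the direct scripts/rpc.py invocation over the default RPC socket is an assumption, the arguments are taken from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed target of the rpc_cmd wrapper
    $RPC nvmf_create_transport -t tcp -o -u 8192                           # discovery.sh@23
    $RPC bdev_null_create Null1 102400 512                                 # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With all four iterations applied, the nvme discover against 10.0.0.2:4420 further down in the log reports one current discovery entry, four NVMe subsystem entries, and one referral entry.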
00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 Null3 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 Null4 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.978 15:27:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:08:59.978 00:08:59.978 Discovery Log Number of Records 6, Generation counter 6 00:08:59.978 =====Discovery Log Entry 0====== 00:08:59.978 trtype: tcp 00:08:59.978 adrfam: ipv4 00:08:59.978 subtype: current discovery subsystem 00:08:59.978 treq: not required 00:08:59.978 portid: 0 00:08:59.978 trsvcid: 4420 00:08:59.978 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:59.978 traddr: 10.0.0.2 00:08:59.978 eflags: explicit discovery connections, duplicate discovery information 00:08:59.978 sectype: none 00:08:59.978 =====Discovery Log Entry 1====== 00:08:59.978 trtype: tcp 00:08:59.978 adrfam: ipv4 00:08:59.978 subtype: nvme subsystem 00:08:59.978 treq: not required 00:08:59.978 portid: 0 00:08:59.978 trsvcid: 4420 00:08:59.978 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:59.978 traddr: 10.0.0.2 00:08:59.978 eflags: none 00:08:59.978 sectype: none 00:08:59.978 =====Discovery Log Entry 2====== 00:08:59.978 trtype: tcp 00:08:59.978 adrfam: ipv4 00:08:59.978 subtype: nvme subsystem 00:08:59.978 treq: not required 00:08:59.978 portid: 0 00:08:59.978 trsvcid: 4420 00:08:59.978 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:59.978 traddr: 10.0.0.2 00:08:59.978 eflags: none 00:08:59.978 sectype: none 00:08:59.978 =====Discovery Log Entry 3====== 00:08:59.978 trtype: tcp 00:08:59.978 adrfam: ipv4 
00:08:59.978 subtype: nvme subsystem 00:08:59.978 treq: not required 00:08:59.978 portid: 0 00:08:59.978 trsvcid: 4420 00:08:59.978 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:59.978 traddr: 10.0.0.2 00:08:59.978 eflags: none 00:08:59.978 sectype: none 00:08:59.978 =====Discovery Log Entry 4====== 00:08:59.978 trtype: tcp 00:08:59.978 adrfam: ipv4 00:08:59.978 subtype: nvme subsystem 00:08:59.978 treq: not required 00:08:59.978 portid: 0 00:08:59.978 trsvcid: 4420 00:08:59.978 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:59.978 traddr: 10.0.0.2 00:08:59.978 eflags: none 00:08:59.978 sectype: none 00:08:59.978 =====Discovery Log Entry 5====== 00:08:59.978 trtype: tcp 00:08:59.978 adrfam: ipv4 00:08:59.978 subtype: discovery subsystem referral 00:08:59.978 treq: not required 00:08:59.978 portid: 0 00:08:59.978 trsvcid: 4430 00:08:59.978 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:59.978 traddr: 10.0.0.2 00:08:59.978 eflags: none 00:08:59.978 sectype: none 00:08:59.978 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:59.978 Perform nvmf subsystem discovery via RPC 00:08:59.978 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:59.978 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.978 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.978 [ 00:08:59.978 { 00:08:59.978 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:59.978 "subtype": "Discovery", 00:08:59.978 "listen_addresses": [ 00:08:59.978 { 00:08:59.978 "trtype": "TCP", 00:08:59.978 "adrfam": "IPv4", 00:08:59.978 "traddr": "10.0.0.2", 00:08:59.978 "trsvcid": "4420" 00:08:59.978 } 00:08:59.978 ], 00:08:59.978 "allow_any_host": true, 00:08:59.978 "hosts": [] 00:08:59.978 }, 00:08:59.978 { 00:08:59.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.978 "subtype": "NVMe", 00:08:59.978 "listen_addresses": [ 00:08:59.978 { 00:08:59.978 "trtype": "TCP", 00:08:59.978 "adrfam": "IPv4", 00:08:59.978 "traddr": "10.0.0.2", 00:08:59.978 "trsvcid": "4420" 00:08:59.978 } 00:08:59.978 ], 00:08:59.978 "allow_any_host": true, 00:08:59.979 "hosts": [], 00:08:59.979 "serial_number": "SPDK00000000000001", 00:08:59.979 "model_number": "SPDK bdev Controller", 00:08:59.979 "max_namespaces": 32, 00:08:59.979 "min_cntlid": 1, 00:08:59.979 "max_cntlid": 65519, 00:08:59.979 "namespaces": [ 00:08:59.979 { 00:08:59.979 "nsid": 1, 00:08:59.979 "bdev_name": "Null1", 00:08:59.979 "name": "Null1", 00:08:59.979 "nguid": "599FCBC03E0E458F920F2537DB16783B", 00:08:59.979 "uuid": "599fcbc0-3e0e-458f-920f-2537db16783b" 00:08:59.979 } 00:08:59.979 ] 00:08:59.979 }, 00:08:59.979 { 00:08:59.979 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.979 "subtype": "NVMe", 00:08:59.979 "listen_addresses": [ 00:08:59.979 { 00:08:59.979 "trtype": "TCP", 00:08:59.979 "adrfam": "IPv4", 00:08:59.979 "traddr": "10.0.0.2", 00:08:59.979 "trsvcid": "4420" 00:08:59.979 } 00:08:59.979 ], 00:08:59.979 "allow_any_host": true, 00:08:59.979 "hosts": [], 00:08:59.979 "serial_number": "SPDK00000000000002", 00:08:59.979 "model_number": "SPDK bdev Controller", 00:08:59.979 "max_namespaces": 32, 00:08:59.979 "min_cntlid": 1, 00:08:59.979 "max_cntlid": 65519, 00:08:59.979 "namespaces": [ 00:08:59.979 { 00:08:59.979 "nsid": 1, 00:08:59.979 "bdev_name": "Null2", 00:08:59.979 "name": "Null2", 00:08:59.979 "nguid": "E1318D99EC0749FBABCF6BAE8EFACDBA", 00:08:59.979 "uuid": 
"e1318d99-ec07-49fb-abcf-6bae8efacdba" 00:08:59.979 } 00:08:59.979 ] 00:08:59.979 }, 00:08:59.979 { 00:08:59.979 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:59.979 "subtype": "NVMe", 00:08:59.979 "listen_addresses": [ 00:08:59.979 { 00:08:59.979 "trtype": "TCP", 00:08:59.979 "adrfam": "IPv4", 00:08:59.979 "traddr": "10.0.0.2", 00:08:59.979 "trsvcid": "4420" 00:08:59.979 } 00:08:59.979 ], 00:08:59.979 "allow_any_host": true, 00:08:59.979 "hosts": [], 00:08:59.979 "serial_number": "SPDK00000000000003", 00:08:59.979 "model_number": "SPDK bdev Controller", 00:08:59.979 "max_namespaces": 32, 00:08:59.979 "min_cntlid": 1, 00:08:59.979 "max_cntlid": 65519, 00:08:59.979 "namespaces": [ 00:08:59.979 { 00:08:59.979 "nsid": 1, 00:08:59.979 "bdev_name": "Null3", 00:08:59.979 "name": "Null3", 00:08:59.979 "nguid": "089445EC728D4FD29ED346E38330D447", 00:08:59.979 "uuid": "089445ec-728d-4fd2-9ed3-46e38330d447" 00:08:59.979 } 00:08:59.979 ] 00:08:59.979 }, 00:08:59.979 { 00:08:59.979 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:59.979 "subtype": "NVMe", 00:08:59.979 "listen_addresses": [ 00:08:59.979 { 00:08:59.979 "trtype": "TCP", 00:08:59.979 "adrfam": "IPv4", 00:08:59.979 "traddr": "10.0.0.2", 00:08:59.979 "trsvcid": "4420" 00:08:59.979 } 00:08:59.979 ], 00:08:59.979 "allow_any_host": true, 00:08:59.979 "hosts": [], 00:08:59.979 "serial_number": "SPDK00000000000004", 00:08:59.979 "model_number": "SPDK bdev Controller", 00:08:59.979 "max_namespaces": 32, 00:08:59.979 "min_cntlid": 1, 00:08:59.979 "max_cntlid": 65519, 00:08:59.979 "namespaces": [ 00:08:59.979 { 00:08:59.979 "nsid": 1, 00:08:59.979 "bdev_name": "Null4", 00:08:59.979 "name": "Null4", 00:08:59.979 "nguid": "CEFDE6CBA46B4E6D9A14573A272008A4", 00:08:59.979 "uuid": "cefde6cb-a46b-4e6d-9a14-573a272008a4" 00:08:59.979 } 00:08:59.979 ] 00:08:59.979 } 00:08:59.979 ] 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.979 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:00.237 
15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.237 rmmod nvme_tcp 00:09:00.237 rmmod nvme_fabrics 00:09:00.237 rmmod nvme_keyring 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1202396 ']' 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1202396 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1202396 ']' 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1202396 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1202396 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1202396' 00:09:00.237 killing process with pid 1202396 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1202396 00:09:00.237 [2024-05-15 15:27:13.265551] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:00.237 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1202396 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.495 15:27:13 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.055 15:27:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.055 00:09:03.055 real 0m5.698s 00:09:03.055 user 0m4.178s 00:09:03.055 sys 0m2.108s 00:09:03.055 15:27:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:03.055 15:27:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:03.055 ************************************ 00:09:03.055 END TEST nvmf_target_discovery 00:09:03.055 ************************************ 00:09:03.055 15:27:15 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:03.055 15:27:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:03.055 15:27:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:03.055 15:27:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.055 ************************************ 00:09:03.055 START TEST nvmf_referrals 00:09:03.055 ************************************ 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:03.055 * Looking for test storage... 00:09:03.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.055 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.056 15:27:15 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.056 15:27:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:05.582 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:05.582 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.582 15:27:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.582 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:05.583 Found net devices under 0000:09:00.0: cvl_0_0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:05.583 Found net devices under 0000:09:00.1: cvl_0_1 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:09:05.583 00:09:05.583 --- 10.0.0.2 ping statistics --- 00:09:05.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.583 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:09:05.583 00:09:05.583 --- 10.0.0.1 ping statistics --- 00:09:05.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.583 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1204783 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1204783 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1204783 ']' 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.583 [2024-05-15 15:27:18.375898] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:09:05.583 [2024-05-15 15:27:18.375983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.583 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.583 [2024-05-15 15:27:18.418635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:05.583 [2024-05-15 15:27:18.450803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.583 [2024-05-15 15:27:18.536405] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.583 [2024-05-15 15:27:18.536464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.583 [2024-05-15 15:27:18.536493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.583 [2024-05-15 15:27:18.536506] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.583 [2024-05-15 15:27:18.536517] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.583 [2024-05-15 15:27:18.536593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.583 [2024-05-15 15:27:18.536643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.583 [2024-05-15 15:27:18.536756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.583 [2024-05-15 15:27:18.536759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.583 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 [2024-05-15 15:27:18.684983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 [2024-05-15 15:27:18.696954] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:05.840 [2024-05-15 15:27:18.697252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set 
+x 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:05.840 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:06.098 15:27:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.098 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.099 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:06.356 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:06.613 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:06.614 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.614 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.871 rmmod nvme_tcp 00:09:06.871 rmmod nvme_fabrics 00:09:06.871 rmmod nvme_keyring 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@489 -- # '[' -n 1204783 ']' 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1204783 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1204783 ']' 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1204783 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:06.871 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1204783 00:09:07.130 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:07.130 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:07.130 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1204783' 00:09:07.130 killing process with pid 1204783 00:09:07.130 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1204783 00:09:07.130 [2024-05-15 15:27:19.985552] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:07.130 15:27:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1204783 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.130 15:27:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.659 15:27:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.659 00:09:09.659 real 0m6.673s 00:09:09.659 user 0m8.231s 00:09:09.659 sys 0m2.273s 00:09:09.659 15:27:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.659 15:27:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.659 ************************************ 00:09:09.659 END TEST nvmf_referrals 00:09:09.659 ************************************ 00:09:09.659 15:27:22 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:09.659 15:27:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:09.659 15:27:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.659 15:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.659 ************************************ 00:09:09.659 START TEST nvmf_connect_disconnect 00:09:09.659 ************************************ 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:09.659 * Looking for test storage... 00:09:09.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.659 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.660 15:27:22 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.660 15:27:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:12.181 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.181 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:12.182 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:12.182 Found net devices under 0000:09:00.0: cvl_0_0 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:12.182 Found net devices under 0000:09:00.1: cvl_0_1 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.182 
15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:12.182 00:09:12.182 --- 10.0.0.2 ping statistics --- 00:09:12.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.182 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:09:12.182 00:09:12.182 --- 10.0.0.1 ping statistics --- 00:09:12.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.182 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1207360 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1207360 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1207360 ']' 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:12.182 15:27:24 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:12.182 15:27:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.182 [2024-05-15 15:27:24.988187] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:09:12.182 [2024-05-15 15:27:24.988289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.182 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.182 [2024-05-15 15:27:25.031953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:12.182 [2024-05-15 15:27:25.069901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.182 [2024-05-15 15:27:25.162785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.182 [2024-05-15 15:27:25.162847] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.182 [2024-05-15 15:27:25.162872] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.182 [2024-05-15 15:27:25.162885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.182 [2024-05-15 15:27:25.162896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
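For reference, the nvmf_tcp_init and nvmfappstart steps traced above reduce to roughly the shell sequence below. This is a condensed sketch assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.x addressing seen in this run; it is not the harness's literal code.

# put the target-side port in its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) traffic in
ping -c 1 10.0.0.2                                                   # verify reachability in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace, as nvmfappstart does for this job
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &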
00:09:12.182 [2024-05-15 15:27:25.163000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.182 [2024-05-15 15:27:25.163063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.182 [2024-05-15 15:27:25.163114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.182 [2024-05-15 15:27:25.163117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.446 [2024-05-15 15:27:25.330162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.446 [2024-05-15 15:27:25.384619] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:12.446 [2024-05-15 15:27:25.384937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:12.446 15:27:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:14.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.530 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:24.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.728 rmmod nvme_tcp 00:12:57.728 rmmod nvme_fabrics 00:12:57.728 rmmod nvme_keyring 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1207360 ']' 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1207360 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1207360 ']' 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1207360 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1207360 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1207360' 00:12:57.728 killing process with pid 1207360 
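In outline, the connect_disconnect test body that produced the hundred "disconnected 1 controller(s)" lines above provisions a single malloc-backed subsystem over the RPC socket and then attaches and detaches an initiator in a loop. The sketch below uses scripts/rpc.py in place of the harness's rpc_cmd wrapper; the RPC arguments, NQN, serial and iteration count are the ones visible in the trace, while the loop body is simplified (the real test also waits for the namespace device to appear before tearing it down).

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                       # 64 MB malloc bdev, 512-byte blocks, returned as Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints the "disconnected 1 controller(s)" lines seen above
done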
00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1207360 00:12:57.728 [2024-05-15 15:31:10.826542] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:57.728 15:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1207360 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.295 15:31:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.226 15:31:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.226 00:13:00.226 real 3m50.808s 00:13:00.226 user 14m37.222s 00:13:00.226 sys 0m31.135s 00:13:00.226 15:31:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.226 15:31:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 ************************************ 00:13:00.226 END TEST nvmf_connect_disconnect 00:13:00.226 ************************************ 00:13:00.226 15:31:13 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:00.226 15:31:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.226 15:31:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.226 15:31:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 ************************************ 00:13:00.226 START TEST nvmf_multitarget 00:13:00.226 ************************************ 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:00.226 * Looking for test storage... 
00:13:00.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.226 15:31:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:03.508 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:03.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:03.508 Found net devices under 0000:09:00.0: cvl_0_0 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:03.508 Found net devices under 0000:09:00.1: cvl_0_1 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.508 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.509 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.509 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.509 15:31:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:03.509 00:13:03.509 --- 10.0.0.2 ping statistics --- 00:13:03.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.509 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:03.509 00:13:03.509 --- 10.0.0.1 ping statistics --- 00:13:03.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.509 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1238162 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1238162 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1238162 ']' 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 [2024-05-15 15:31:16.081589] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
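The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper. A minimal stand-in, under the assumption that polling the RPC socket with scripts/rpc.py is an acceptable readiness check (the helper's actual implementation may differ):

# poll until the freshly started nvmf_tgt answers on its RPC socket
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done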
00:13:03.509 [2024-05-15 15:31:16.081678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.509 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.509 [2024-05-15 15:31:16.125685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:03.509 [2024-05-15 15:31:16.156795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.509 [2024-05-15 15:31:16.238910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.509 [2024-05-15 15:31:16.238955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.509 [2024-05-15 15:31:16.238983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.509 [2024-05-15 15:31:16.238997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.509 [2024-05-15 15:31:16.239007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.509 [2024-05-15 15:31:16.239056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.509 [2024-05-15 15:31:16.239116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.509 [2024-05-15 15:31:16.239181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.509 [2024-05-15 15:31:16.239183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:03.509 "nvmf_tgt_1" 00:13:03.509 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:03.767 "nvmf_tgt_2" 00:13:03.767 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:03.767 
15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:03.767 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:03.767 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:04.025 true 00:13:04.025 15:31:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:04.025 true 00:13:04.025 15:31:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:04.025 15:31:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.284 rmmod nvme_tcp 00:13:04.284 rmmod nvme_fabrics 00:13:04.284 rmmod nvme_keyring 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1238162 ']' 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1238162 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1238162 ']' 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1238162 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1238162 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1238162' 00:13:04.284 killing process with pid 1238162 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1238162 00:13:04.284 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1238162 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.541 15:31:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.445 15:31:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.445 00:13:06.445 real 0m6.333s 00:13:06.445 user 0m6.583s 00:13:06.445 sys 0m2.342s 00:13:06.445 15:31:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.445 15:31:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:06.445 ************************************ 00:13:06.445 END TEST nvmf_multitarget 00:13:06.445 ************************************ 00:13:06.704 15:31:19 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:06.704 15:31:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:06.704 15:31:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.704 15:31:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.704 ************************************ 00:13:06.704 START TEST nvmf_rpc 00:13:06.704 ************************************ 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:06.704 * Looking for test storage... 
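Stepping back to the nvmf_multitarget run that just completed above: it exercised the multi-target RPCs end to end, creating two extra targets, confirming the count, and deleting them again. A condensed sketch using the same multitarget_rpc.py helper and jq checks seen in the trace (error handling omitted):

multitarget_rpc.py nvmf_get_targets | jq length           # 1: only the default target exists
multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
multitarget_rpc.py nvmf_get_targets | jq length           # now 3
multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
multitarget_rpc.py nvmf_get_targets | jq length           # back to 1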
00:13:06.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.704 15:31:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:09.229 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:09.229 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:09.229 Found net devices under 0000:09:00.0: cvl_0_0 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:09.229 Found net devices under 0000:09:00.1: cvl_0_1 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.229 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:13:09.487 00:13:09.487 --- 10.0.0.2 ping statistics --- 00:13:09.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.487 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:13:09.487 00:13:09.487 --- 10.0.0.1 ping statistics --- 00:13:09.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.487 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1240549 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1240549 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1240549 ']' 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.487 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.487 [2024-05-15 15:31:22.485145] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:09.487 [2024-05-15 15:31:22.485236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.487 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.487 [2024-05-15 15:31:22.528612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:09.487 [2024-05-15 15:31:22.566206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.745 [2024-05-15 15:31:22.659226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:09.745 [2024-05-15 15:31:22.659279] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.745 [2024-05-15 15:31:22.659296] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.745 [2024-05-15 15:31:22.659309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.745 [2024-05-15 15:31:22.659321] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.745 [2024-05-15 15:31:22.659379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.745 [2024-05-15 15:31:22.659435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.745 [2024-05-15 15:31:22.659501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.745 [2024-05-15 15:31:22.659503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.745 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:09.746 "tick_rate": 2700000000, 00:13:09.746 "poll_groups": [ 00:13:09.746 { 00:13:09.746 "name": "nvmf_tgt_poll_group_000", 00:13:09.746 "admin_qpairs": 0, 00:13:09.746 "io_qpairs": 0, 00:13:09.746 "current_admin_qpairs": 0, 00:13:09.746 "current_io_qpairs": 0, 00:13:09.746 "pending_bdev_io": 0, 00:13:09.746 "completed_nvme_io": 0, 00:13:09.746 "transports": [] 00:13:09.746 }, 00:13:09.746 { 00:13:09.746 "name": "nvmf_tgt_poll_group_001", 00:13:09.746 "admin_qpairs": 0, 00:13:09.746 "io_qpairs": 0, 00:13:09.746 "current_admin_qpairs": 0, 00:13:09.746 "current_io_qpairs": 0, 00:13:09.746 "pending_bdev_io": 0, 00:13:09.746 "completed_nvme_io": 0, 00:13:09.746 "transports": [] 00:13:09.746 }, 00:13:09.746 { 00:13:09.746 "name": "nvmf_tgt_poll_group_002", 00:13:09.746 "admin_qpairs": 0, 00:13:09.746 "io_qpairs": 0, 00:13:09.746 "current_admin_qpairs": 0, 00:13:09.746 "current_io_qpairs": 0, 00:13:09.746 "pending_bdev_io": 0, 00:13:09.746 "completed_nvme_io": 0, 00:13:09.746 "transports": [] 00:13:09.746 }, 00:13:09.746 { 00:13:09.746 "name": "nvmf_tgt_poll_group_003", 00:13:09.746 "admin_qpairs": 0, 00:13:09.746 "io_qpairs": 0, 00:13:09.746 "current_admin_qpairs": 0, 00:13:09.746 "current_io_qpairs": 0, 00:13:09.746 "pending_bdev_io": 0, 00:13:09.746 "completed_nvme_io": 0, 00:13:09.746 "transports": [] 00:13:09.746 } 00:13:09.746 ] 00:13:09.746 }' 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:09.746 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.004 [2024-05-15 15:31:22.903171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.004 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:10.004 "tick_rate": 2700000000, 00:13:10.004 "poll_groups": [ 00:13:10.004 { 00:13:10.004 "name": "nvmf_tgt_poll_group_000", 00:13:10.004 "admin_qpairs": 0, 00:13:10.004 "io_qpairs": 0, 00:13:10.004 "current_admin_qpairs": 0, 00:13:10.004 "current_io_qpairs": 0, 00:13:10.004 "pending_bdev_io": 0, 00:13:10.004 "completed_nvme_io": 0, 00:13:10.004 "transports": [ 00:13:10.004 { 00:13:10.004 "trtype": "TCP" 00:13:10.004 } 00:13:10.004 ] 00:13:10.004 }, 00:13:10.004 { 00:13:10.004 "name": "nvmf_tgt_poll_group_001", 00:13:10.004 "admin_qpairs": 0, 00:13:10.004 "io_qpairs": 0, 00:13:10.004 "current_admin_qpairs": 0, 00:13:10.004 "current_io_qpairs": 0, 00:13:10.004 "pending_bdev_io": 0, 00:13:10.004 "completed_nvme_io": 0, 00:13:10.004 "transports": [ 00:13:10.004 { 00:13:10.004 "trtype": "TCP" 00:13:10.004 } 00:13:10.004 ] 00:13:10.004 }, 00:13:10.004 { 00:13:10.004 "name": "nvmf_tgt_poll_group_002", 00:13:10.004 "admin_qpairs": 0, 00:13:10.005 "io_qpairs": 0, 00:13:10.005 "current_admin_qpairs": 0, 00:13:10.005 "current_io_qpairs": 0, 00:13:10.005 "pending_bdev_io": 0, 00:13:10.005 "completed_nvme_io": 0, 00:13:10.005 "transports": [ 00:13:10.005 { 00:13:10.005 "trtype": "TCP" 00:13:10.005 } 00:13:10.005 ] 00:13:10.005 }, 00:13:10.005 { 00:13:10.005 "name": "nvmf_tgt_poll_group_003", 00:13:10.005 "admin_qpairs": 0, 00:13:10.005 "io_qpairs": 0, 00:13:10.005 "current_admin_qpairs": 0, 00:13:10.005 "current_io_qpairs": 0, 00:13:10.005 "pending_bdev_io": 0, 00:13:10.005 "completed_nvme_io": 0, 00:13:10.005 "transports": [ 00:13:10.005 { 00:13:10.005 "trtype": "TCP" 00:13:10.005 } 00:13:10.005 ] 00:13:10.005 } 00:13:10.005 ] 00:13:10.005 }' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.005 15:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 Malloc1 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 [2024-05-15 15:31:23.049871] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:10.005 [2024-05-15 15:31:23.050184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:10.005 [2024-05-15 15:31:23.072538] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:13:10.005 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:10.005 could not add new controller: failed to write to nvme-fabrics device 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.005 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.936 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.936 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:10.936 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.936 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:10.936 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.833 
[2024-05-15 15:31:25.842614] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:13:12.833 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:12.833 could not add new controller: failed to write to nvme-fabrics device 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.833 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.397 15:31:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.397 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:13.397 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.397 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:13.397 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.922 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 [2024-05-15 15:31:28.612926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.923 15:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.180 15:31:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.180 15:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:16.180 15:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.180 15:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:16.180 15:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:18.709 15:31:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 [2024-05-15 15:31:31.364970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 15:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.709 15:31:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.966 15:31:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.966 15:31:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:18.966 15:31:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.966 15:31:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:18.966 15:31:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:21.533 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.534 15:31:34 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.534 [2024-05-15 15:31:34.117927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.534 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.792 15:31:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.792 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:21.792 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.792 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:21.792 15:31:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:23.690 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:23.691 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:23.691 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.691 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:23.691 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.691 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:23.691 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # return 0 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 [2024-05-15 15:31:36.870916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 15:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.515 15:31:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.515 15:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:24.515 15:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.515 15:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:24.515 15:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:26.413 
15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:26.413 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.671 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.672 [2024-05-15 15:31:39.550419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.672 15:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.237 15:31:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.237 15:31:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:27.237 15:31:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.237 15:31:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:27.237 15:31:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.135 15:31:42 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.135 [2024-05-15 15:31:42.232305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.135 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 
[2024-05-15 15:31:42.280328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 [2024-05-15 15:31:42.328484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 [2024-05-15 15:31:42.376646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 [2024-05-15 15:31:42.424817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:29.395 "tick_rate": 2700000000, 00:13:29.395 "poll_groups": [ 00:13:29.395 { 00:13:29.395 "name": "nvmf_tgt_poll_group_000", 00:13:29.395 "admin_qpairs": 2, 00:13:29.395 "io_qpairs": 84, 00:13:29.395 "current_admin_qpairs": 0, 00:13:29.395 "current_io_qpairs": 0, 00:13:29.395 "pending_bdev_io": 0, 00:13:29.395 "completed_nvme_io": 221, 00:13:29.395 "transports": [ 00:13:29.395 { 00:13:29.395 "trtype": "TCP" 00:13:29.395 } 00:13:29.395 ] 
00:13:29.395 }, 00:13:29.395 { 00:13:29.395 "name": "nvmf_tgt_poll_group_001", 00:13:29.395 "admin_qpairs": 2, 00:13:29.395 "io_qpairs": 84, 00:13:29.395 "current_admin_qpairs": 0, 00:13:29.395 "current_io_qpairs": 0, 00:13:29.395 "pending_bdev_io": 0, 00:13:29.395 "completed_nvme_io": 233, 00:13:29.395 "transports": [ 00:13:29.395 { 00:13:29.395 "trtype": "TCP" 00:13:29.395 } 00:13:29.395 ] 00:13:29.395 }, 00:13:29.395 { 00:13:29.395 "name": "nvmf_tgt_poll_group_002", 00:13:29.395 "admin_qpairs": 1, 00:13:29.395 "io_qpairs": 84, 00:13:29.395 "current_admin_qpairs": 0, 00:13:29.395 "current_io_qpairs": 0, 00:13:29.395 "pending_bdev_io": 0, 00:13:29.395 "completed_nvme_io": 135, 00:13:29.395 "transports": [ 00:13:29.395 { 00:13:29.395 "trtype": "TCP" 00:13:29.395 } 00:13:29.395 ] 00:13:29.395 }, 00:13:29.395 { 00:13:29.395 "name": "nvmf_tgt_poll_group_003", 00:13:29.395 "admin_qpairs": 2, 00:13:29.395 "io_qpairs": 84, 00:13:29.395 "current_admin_qpairs": 0, 00:13:29.395 "current_io_qpairs": 0, 00:13:29.395 "pending_bdev_io": 0, 00:13:29.395 "completed_nvme_io": 97, 00:13:29.395 "transports": [ 00:13:29.395 { 00:13:29.395 "trtype": "TCP" 00:13:29.395 } 00:13:29.395 ] 00:13:29.395 } 00:13:29.395 ] 00:13:29.395 }' 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:29.395 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.653 rmmod nvme_tcp 00:13:29.653 rmmod nvme_fabrics 00:13:29.653 rmmod nvme_keyring 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1240549 ']' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1240549 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1240549 ']' 00:13:29.653 15:31:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1240549 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1240549 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1240549' 00:13:29.653 killing process with pid 1240549 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1240549 00:13:29.653 [2024-05-15 15:31:42.617291] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:29.653 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1240549 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.911 15:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.444 15:31:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.444 00:13:32.444 real 0m25.346s 00:13:32.444 user 1m20.148s 00:13:32.444 sys 0m4.212s 00:13:32.444 15:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.444 15:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.444 ************************************ 00:13:32.444 END TEST nvmf_rpc 00:13:32.444 ************************************ 00:13:32.444 15:31:44 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:32.444 15:31:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:32.444 15:31:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.444 15:31:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.444 ************************************ 00:13:32.444 START TEST nvmf_invalid 00:13:32.444 ************************************ 00:13:32.444 15:31:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:32.444 * Looking for test storage... 
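The qpair arithmetic at the end of the nvmf_rpc run above -- summing admin_qpairs and io_qpairs across the four poll groups reported by nvmf_get_stats -- is done by the jsum helper: jq pulls one numeric field out of every poll group and awk sums the column. A minimal standalone sketch of the same aggregation, assuming a running SPDK target and scripts/rpc.py reachable at $RPC (the wrapper below is illustrative; target/rpc.sh captures the stats into a shell variable rather than re-querying the target per call):

    # Sum one per-poll-group counter from nvmf_get_stats output.
    RPC=${RPC:-./scripts/rpc.py}

    jsum() {
        local filter=$1                                 # e.g. '.poll_groups[].io_qpairs'
        "$RPC" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    # Usage mirroring the checks in the log: both totals must be non-zero.
    admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')  # the log above shows 7
    io_qpairs=$(jsum '.poll_groups[].io_qpairs')        # the log above shows 336
    (( admin_qpairs > 0 && io_qpairs > 0 )) || echo "unexpected qpair totals" >&2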
00:13:32.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.444 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.445 15:31:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:34.980 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:34.980 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:34.980 Found net devices under 0000:09:00.0: cvl_0_0 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:34.980 Found net devices under 0000:09:00.1: cvl_0_1 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:13:34.980 00:13:34.980 --- 10.0.0.2 ping statistics --- 00:13:34.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.980 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:34.980 00:13:34.980 --- 10.0.0.1 ping statistics --- 00:13:34.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.980 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:34.980 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1245363 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1245363 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1245363 ']' 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:34.981 15:31:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.981 [2024-05-15 15:31:47.838385] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
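The nvmf/common.sh output above is nvmf_tcp_init at work: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace, the initiator side keeps cvl_0_1, the two ends get 10.0.0.1/24 and 10.0.0.2/24, TCP port 4420 is opened on the initiator interface, and connectivity is pinged in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, substituting a veth pair for the physical e810 ports so it can be tried without that hardware (the veth names, and the use of veth at all, are assumptions of this sketch, not what common.sh does on this rig):

    # Target NIC lives in its own namespace; initiator talks to it over 10.0.0.0/24.
    NS=cvl_0_0_ns_spdk
    sudo ip netns add "$NS"
    sudo ip link add veth_tgt type veth peer name veth_ini
    sudo ip link set veth_tgt netns "$NS"
    sudo ip addr add 10.0.0.1/24 dev veth_ini                       # initiator side
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt   # target side
    sudo ip link set veth_ini up
    sudo ip netns exec "$NS" ip link set veth_tgt up
    sudo ip netns exec "$NS" ip link set lo up
    sudo iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && sudo ip netns exec "$NS" ping -c 1 10.0.0.1
    # The target then runs inside the namespace, as the waitforlisten step shows:
    #   sudo ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF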
00:13:34.981 [2024-05-15 15:31:47.838464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.981 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.981 [2024-05-15 15:31:47.886071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:34.981 [2024-05-15 15:31:47.924898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.981 [2024-05-15 15:31:48.019817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.981 [2024-05-15 15:31:48.019872] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.981 [2024-05-15 15:31:48.019889] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.981 [2024-05-15 15:31:48.019902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.981 [2024-05-15 15:31:48.019914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.981 [2024-05-15 15:31:48.019995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.981 [2024-05-15 15:31:48.020048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.981 [2024-05-15 15:31:48.020073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.981 [2024-05-15 15:31:48.020077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:35.239 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27410 00:13:35.495 [2024-05-15 15:31:48.392743] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:35.495 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:35.495 { 00:13:35.495 "nqn": "nqn.2016-06.io.spdk:cnode27410", 00:13:35.495 "tgt_name": "foobar", 00:13:35.495 "method": "nvmf_create_subsystem", 00:13:35.495 "req_id": 1 00:13:35.495 } 00:13:35.495 Got JSON-RPC error response 00:13:35.495 response: 00:13:35.495 { 00:13:35.496 "code": -32603, 00:13:35.496 "message": "Unable to find target foobar" 00:13:35.496 }' 00:13:35.496 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:35.496 { 00:13:35.496 "nqn": "nqn.2016-06.io.spdk:cnode27410", 00:13:35.496 "tgt_name": "foobar", 00:13:35.496 "method": "nvmf_create_subsystem", 00:13:35.496 "req_id": 1 00:13:35.496 } 
00:13:35.496 Got JSON-RPC error response 00:13:35.496 response: 00:13:35.496 { 00:13:35.496 "code": -32603, 00:13:35.496 "message": "Unable to find target foobar" 00:13:35.496 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:35.496 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:35.496 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17177 00:13:35.753 [2024-05-15 15:31:48.637592] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17177: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:35.753 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:35.753 { 00:13:35.753 "nqn": "nqn.2016-06.io.spdk:cnode17177", 00:13:35.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:35.753 "method": "nvmf_create_subsystem", 00:13:35.753 "req_id": 1 00:13:35.753 } 00:13:35.753 Got JSON-RPC error response 00:13:35.753 response: 00:13:35.753 { 00:13:35.753 "code": -32602, 00:13:35.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:35.753 }' 00:13:35.753 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:35.753 { 00:13:35.753 "nqn": "nqn.2016-06.io.spdk:cnode17177", 00:13:35.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:35.753 "method": "nvmf_create_subsystem", 00:13:35.753 "req_id": 1 00:13:35.753 } 00:13:35.753 Got JSON-RPC error response 00:13:35.753 response: 00:13:35.753 { 00:13:35.753 "code": -32602, 00:13:35.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:35.753 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:35.753 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:35.753 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17134 00:13:36.011 [2024-05-15 15:31:48.886391] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17134: invalid model number 'SPDK_Controller' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:36.011 { 00:13:36.011 "nqn": "nqn.2016-06.io.spdk:cnode17134", 00:13:36.011 "model_number": "SPDK_Controller\u001f", 00:13:36.011 "method": "nvmf_create_subsystem", 00:13:36.011 "req_id": 1 00:13:36.011 } 00:13:36.011 Got JSON-RPC error response 00:13:36.011 response: 00:13:36.011 { 00:13:36.011 "code": -32602, 00:13:36.011 "message": "Invalid MN SPDK_Controller\u001f" 00:13:36.011 }' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:36.011 { 00:13:36.011 "nqn": "nqn.2016-06.io.spdk:cnode17134", 00:13:36.011 "model_number": "SPDK_Controller\u001f", 00:13:36.011 "method": "nvmf_create_subsystem", 00:13:36.011 "req_id": 1 00:13:36.011 } 00:13:36.011 Got JSON-RPC error response 00:13:36.011 response: 00:13:36.011 { 00:13:36.011 "code": -32602, 00:13:36.011 "message": "Invalid MN SPDK_Controller\u001f" 00:13:36.011 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' 
'51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:36.011 
15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:36.011 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:36.012 
15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'e6_~C} "wQ{f{B9[Z#F1~' 00:13:36.012 15:31:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'e6_~C} "wQ{f{B9[Z#F1~' nqn.2016-06.io.spdk:cnode21024 00:13:36.271 [2024-05-15 15:31:49.223556] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21024: invalid serial number 'e6_~C} "wQ{f{B9[Z#F1~' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:36.271 { 00:13:36.271 "nqn": "nqn.2016-06.io.spdk:cnode21024", 00:13:36.271 "serial_number": "e6_~C} 
\"wQ{f{B9[Z#F1~", 00:13:36.271 "method": "nvmf_create_subsystem", 00:13:36.271 "req_id": 1 00:13:36.271 } 00:13:36.271 Got JSON-RPC error response 00:13:36.271 response: 00:13:36.271 { 00:13:36.271 "code": -32602, 00:13:36.271 "message": "Invalid SN e6_~C} \"wQ{f{B9[Z#F1~" 00:13:36.271 }' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:36.271 { 00:13:36.271 "nqn": "nqn.2016-06.io.spdk:cnode21024", 00:13:36.271 "serial_number": "e6_~C} \"wQ{f{B9[Z#F1~", 00:13:36.271 "method": "nvmf_create_subsystem", 00:13:36.271 "req_id": 1 00:13:36.271 } 00:13:36.271 Got JSON-RPC error response 00:13:36.271 response: 00:13:36.271 { 00:13:36.271 "code": -32602, 00:13:36.271 "message": "Invalid SN e6_~C} \"wQ{f{B9[Z#F1~" 00:13:36.271 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:36.271 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
124 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6d' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.272 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 
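The character-by-character expansion traced here is invalid.sh's gen_random_s helper building the random serial- and model-number strings one byte at a time: it picks a decimal ASCII code from the chars array, converts it with printf %x, materializes the character with echo -e, and appends it to string. The [[ ... == \- ]] test near the end checks whether the first generated character is '-', presumably so the result is not mistaken for a command-line option when handed to rpc.py. A condensed sketch of the same pattern, assuming chars holds the codes 32-127 as in the trace (an illustration, not the verbatim invalid.sh source):

    gen_random_s() {
        local length=$1 ll string=
        local chars=( $(seq 32 127) )                        # same code range as the traced array
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\\x$(printf '%x' "$code")")   # e.g. 101 -> \x65 -> 'e'
        done
        printf '%s\n' "$string"                              # the traced script uses plain echo
    }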
00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ i == \- ]] 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ihN-q^Yhtp/Q([IYp%8|;l/7\-eml6My1}xB+Z7f' 00:13:36.273 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ihN-q^Yhtp/Q([IYp%8|;l/7\-eml6My1}xB+Z7f' nqn.2016-06.io.spdk:cnode8141 00:13:36.530 [2024-05-15 15:31:49.580712] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8141: invalid model number 'ihN-q^Yhtp/Q([IYp%8|;l/7\-eml6My1}xB+Z7f' 00:13:36.530 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:36.530 { 00:13:36.530 "nqn": "nqn.2016-06.io.spdk:cnode8141", 00:13:36.530 "model_number": "ihN-q^Yhtp/Q\u007f([IYp%8|;l/7\\-eml6My1}xB+Z7f", 00:13:36.530 "method": "nvmf_create_subsystem", 00:13:36.530 "req_id": 1 00:13:36.530 } 00:13:36.530 Got JSON-RPC error response 00:13:36.530 response: 00:13:36.530 { 00:13:36.530 "code": -32602, 00:13:36.530 "message": "Invalid MN ihN-q^Yhtp/Q\u007f([IYp%8|;l/7\\-eml6My1}xB+Z7f" 00:13:36.530 }' 00:13:36.530 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:36.530 { 00:13:36.530 "nqn": "nqn.2016-06.io.spdk:cnode8141", 00:13:36.530 "model_number": "ihN-q^Yhtp/Q\u007f([IYp%8|;l/7\\-eml6My1}xB+Z7f", 00:13:36.530 "method": "nvmf_create_subsystem", 00:13:36.530 "req_id": 1 00:13:36.530 } 00:13:36.530 Got JSON-RPC error response 00:13:36.530 response: 00:13:36.530 { 00:13:36.530 "code": -32602, 00:13:36.530 "message": "Invalid MN 
ihN-q^Yhtp/Q\u007f([IYp%8|;l/7\\-eml6My1}xB+Z7f" 00:13:36.530 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:36.530 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:36.787 [2024-05-15 15:31:49.821588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.787 15:31:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:37.044 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:37.044 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:37.044 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:37.044 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:37.044 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:37.300 [2024-05-15 15:31:50.359293] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:37.300 [2024-05-15 15:31:50.359413] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:37.300 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:37.300 { 00:13:37.300 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:37.300 "listen_address": { 00:13:37.300 "trtype": "tcp", 00:13:37.300 "traddr": "", 00:13:37.300 "trsvcid": "4421" 00:13:37.300 }, 00:13:37.300 "method": "nvmf_subsystem_remove_listener", 00:13:37.300 "req_id": 1 00:13:37.300 } 00:13:37.300 Got JSON-RPC error response 00:13:37.300 response: 00:13:37.300 { 00:13:37.300 "code": -32602, 00:13:37.300 "message": "Invalid parameters" 00:13:37.300 }' 00:13:37.300 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:37.300 { 00:13:37.300 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:37.300 "listen_address": { 00:13:37.300 "trtype": "tcp", 00:13:37.300 "traddr": "", 00:13:37.300 "trsvcid": "4421" 00:13:37.300 }, 00:13:37.300 "method": "nvmf_subsystem_remove_listener", 00:13:37.300 "req_id": 1 00:13:37.300 } 00:13:37.300 Got JSON-RPC error response 00:13:37.300 response: 00:13:37.300 { 00:13:37.300 "code": -32602, 00:13:37.300 "message": "Invalid parameters" 00:13:37.300 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:37.300 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23972 -i 0 00:13:37.558 [2024-05-15 15:31:50.616125] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23972: invalid cntlid range [0-65519] 00:13:37.558 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:37.558 { 00:13:37.558 "nqn": "nqn.2016-06.io.spdk:cnode23972", 00:13:37.558 "min_cntlid": 0, 00:13:37.558 "method": "nvmf_create_subsystem", 00:13:37.558 "req_id": 1 00:13:37.558 } 00:13:37.558 Got JSON-RPC error response 00:13:37.558 response: 00:13:37.558 { 00:13:37.558 "code": -32602, 00:13:37.558 "message": "Invalid cntlid range [0-65519]" 00:13:37.558 }' 00:13:37.558 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:37.558 { 
00:13:37.558 "nqn": "nqn.2016-06.io.spdk:cnode23972", 00:13:37.558 "min_cntlid": 0, 00:13:37.558 "method": "nvmf_create_subsystem", 00:13:37.558 "req_id": 1 00:13:37.558 } 00:13:37.558 Got JSON-RPC error response 00:13:37.558 response: 00:13:37.558 { 00:13:37.558 "code": -32602, 00:13:37.558 "message": "Invalid cntlid range [0-65519]" 00:13:37.558 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.558 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22193 -i 65520 00:13:37.815 [2024-05-15 15:31:50.868980] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22193: invalid cntlid range [65520-65519] 00:13:37.815 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:37.815 { 00:13:37.815 "nqn": "nqn.2016-06.io.spdk:cnode22193", 00:13:37.815 "min_cntlid": 65520, 00:13:37.815 "method": "nvmf_create_subsystem", 00:13:37.815 "req_id": 1 00:13:37.815 } 00:13:37.815 Got JSON-RPC error response 00:13:37.815 response: 00:13:37.815 { 00:13:37.815 "code": -32602, 00:13:37.815 "message": "Invalid cntlid range [65520-65519]" 00:13:37.815 }' 00:13:37.815 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:37.815 { 00:13:37.815 "nqn": "nqn.2016-06.io.spdk:cnode22193", 00:13:37.815 "min_cntlid": 65520, 00:13:37.815 "method": "nvmf_create_subsystem", 00:13:37.815 "req_id": 1 00:13:37.815 } 00:13:37.815 Got JSON-RPC error response 00:13:37.815 response: 00:13:37.815 { 00:13:37.815 "code": -32602, 00:13:37.815 "message": "Invalid cntlid range [65520-65519]" 00:13:37.815 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.815 15:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30072 -I 0 00:13:38.073 [2024-05-15 15:31:51.105816] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30072: invalid cntlid range [1-0] 00:13:38.073 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:38.073 { 00:13:38.073 "nqn": "nqn.2016-06.io.spdk:cnode30072", 00:13:38.073 "max_cntlid": 0, 00:13:38.073 "method": "nvmf_create_subsystem", 00:13:38.073 "req_id": 1 00:13:38.073 } 00:13:38.073 Got JSON-RPC error response 00:13:38.073 response: 00:13:38.073 { 00:13:38.073 "code": -32602, 00:13:38.073 "message": "Invalid cntlid range [1-0]" 00:13:38.073 }' 00:13:38.073 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:38.073 { 00:13:38.073 "nqn": "nqn.2016-06.io.spdk:cnode30072", 00:13:38.073 "max_cntlid": 0, 00:13:38.073 "method": "nvmf_create_subsystem", 00:13:38.073 "req_id": 1 00:13:38.073 } 00:13:38.073 Got JSON-RPC error response 00:13:38.073 response: 00:13:38.073 { 00:13:38.073 "code": -32602, 00:13:38.073 "message": "Invalid cntlid range [1-0]" 00:13:38.073 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.073 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20491 -I 65520 00:13:38.368 [2024-05-15 15:31:51.366718] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20491: invalid cntlid range [1-65520] 00:13:38.368 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:38.368 { 00:13:38.368 "nqn": 
"nqn.2016-06.io.spdk:cnode20491", 00:13:38.368 "max_cntlid": 65520, 00:13:38.368 "method": "nvmf_create_subsystem", 00:13:38.368 "req_id": 1 00:13:38.368 } 00:13:38.368 Got JSON-RPC error response 00:13:38.368 response: 00:13:38.368 { 00:13:38.368 "code": -32602, 00:13:38.368 "message": "Invalid cntlid range [1-65520]" 00:13:38.368 }' 00:13:38.368 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:38.368 { 00:13:38.368 "nqn": "nqn.2016-06.io.spdk:cnode20491", 00:13:38.368 "max_cntlid": 65520, 00:13:38.368 "method": "nvmf_create_subsystem", 00:13:38.368 "req_id": 1 00:13:38.368 } 00:13:38.368 Got JSON-RPC error response 00:13:38.368 response: 00:13:38.368 { 00:13:38.368 "code": -32602, 00:13:38.368 "message": "Invalid cntlid range [1-65520]" 00:13:38.368 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.368 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29758 -i 6 -I 5 00:13:38.625 [2024-05-15 15:31:51.611545] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29758: invalid cntlid range [6-5] 00:13:38.625 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:38.625 { 00:13:38.625 "nqn": "nqn.2016-06.io.spdk:cnode29758", 00:13:38.625 "min_cntlid": 6, 00:13:38.625 "max_cntlid": 5, 00:13:38.625 "method": "nvmf_create_subsystem", 00:13:38.625 "req_id": 1 00:13:38.625 } 00:13:38.625 Got JSON-RPC error response 00:13:38.625 response: 00:13:38.625 { 00:13:38.625 "code": -32602, 00:13:38.625 "message": "Invalid cntlid range [6-5]" 00:13:38.625 }' 00:13:38.625 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:38.625 { 00:13:38.625 "nqn": "nqn.2016-06.io.spdk:cnode29758", 00:13:38.625 "min_cntlid": 6, 00:13:38.625 "max_cntlid": 5, 00:13:38.625 "method": "nvmf_create_subsystem", 00:13:38.625 "req_id": 1 00:13:38.625 } 00:13:38.625 Got JSON-RPC error response 00:13:38.625 response: 00:13:38.625 { 00:13:38.625 "code": -32602, 00:13:38.625 "message": "Invalid cntlid range [6-5]" 00:13:38.625 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.625 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:38.882 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:38.882 { 00:13:38.882 "name": "foobar", 00:13:38.882 "method": "nvmf_delete_target", 00:13:38.882 "req_id": 1 00:13:38.882 } 00:13:38.882 Got JSON-RPC error response 00:13:38.882 response: 00:13:38.882 { 00:13:38.882 "code": -32602, 00:13:38.882 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:38.882 }' 00:13:38.882 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:38.882 { 00:13:38.882 "name": "foobar", 00:13:38.882 "method": "nvmf_delete_target", 00:13:38.882 "req_id": 1 00:13:38.882 } 00:13:38.882 Got JSON-RPC error response 00:13:38.882 response: 00:13:38.882 { 00:13:38.882 "code": -32602, 00:13:38.882 "message": "The specified target doesn't exist, cannot delete it." 
00:13:38.882 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:38.882 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:38.882 15:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.883 rmmod nvme_tcp 00:13:38.883 rmmod nvme_fabrics 00:13:38.883 rmmod nvme_keyring 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1245363 ']' 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1245363 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 1245363 ']' 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 1245363 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1245363 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1245363' 00:13:38.883 killing process with pid 1245363 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 1245363 00:13:38.883 [2024-05-15 15:31:51.818845] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:38.883 15:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 1245363 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.141 15:31:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.142 15:31:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.043 15:31:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
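Taken together, the rejected requests above pin down what the nvmf_invalid test expects from the target: serial and model numbers containing characters outside the allowed set are refused ("Invalid SN" / "Invalid MN"), the cntlid errors imply that both limits must lie in 1-65519 with min_cntlid not exceeding max_cntlid ("Invalid cntlid range"), a listener cannot be removed from an empty address, and deleting an unknown target fails. Every case follows the same capture-and-match pattern; a minimal sketch of one of them, reusing the rpc.py path and an NQN from this log (illustrative only, not the exact test code):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23972 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || echo "expected the request to be rejected"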
00:13:41.043 00:13:41.043 real 0m9.110s 00:13:41.043 user 0m19.880s 00:13:41.043 sys 0m2.778s 00:13:41.044 15:31:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.044 15:31:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:41.044 ************************************ 00:13:41.044 END TEST nvmf_invalid 00:13:41.044 ************************************ 00:13:41.044 15:31:54 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:41.044 15:31:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.044 15:31:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.044 15:31:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 ************************************ 00:13:41.300 START TEST nvmf_abort 00:13:41.300 ************************************ 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:41.300 * Looking for test storage... 00:13:41.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
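On this phy rig, nvmftestinit discovers the two ice/E810 ports and moves one of them into a private network namespace before the target is started. The namespace plumbing traced over the next several entries reduces to the following commands (interface names and addresses exactly as they appear later in this log; this is a condensed view, not the full common.sh logic):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # connectivity check before nvmf_tgt starts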
00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.300 15:31:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.843 15:31:56 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:43.843 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:43.843 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:43.843 Found net devices under 0000:09:00.0: cvl_0_0 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.843 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:43.844 Found net devices under 0000:09:00.1: cvl_0_1 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:13:43.844 00:13:43.844 --- 10.0.0.2 ping statistics --- 00:13:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.844 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:13:43.844 00:13:43.844 --- 10.0.0.1 ping statistics --- 00:13:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.844 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1248324 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1248324 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1248324 ']' 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:43.844 15:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 [2024-05-15 15:31:56.928348] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
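With the namespace in place, nvmfappstart runs the target inside it and blocks until the JSON-RPC socket answers. The launch command is the one traced above; the wait loop below is a simplified stand-in for autotest_common.sh's waitforlisten, polling the real rpc_get_methods RPC (a sketch, not the exact helper):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                                      # the log shows pid 1248324
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                                   # poll until the RPC server is up
    done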
00:13:43.844 [2024-05-15 15:31:56.928420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.102 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.102 [2024-05-15 15:31:56.972822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:44.102 [2024-05-15 15:31:57.010176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.102 [2024-05-15 15:31:57.102074] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.102 [2024-05-15 15:31:57.102139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.102 [2024-05-15 15:31:57.102157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.102 [2024-05-15 15:31:57.102170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.102 [2024-05-15 15:31:57.102182] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.102 [2024-05-15 15:31:57.102287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.102 [2024-05-15 15:31:57.102339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.102 [2024-05-15 15:31:57.102347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 [2024-05-15 15:31:57.247069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 Malloc0 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 Delay0 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 [2024-05-15 15:31:57.323756] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:44.360 [2024-05-15 15:31:57.324080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.360 15:31:57 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:44.360 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.360 [2024-05-15 15:31:57.429302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:46.885 Initializing NVMe Controllers 00:13:46.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:46.885 controller IO queue size 128 less than required 00:13:46.885 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:46.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:46.885 Initialization complete. Launching workers. 
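For reference, the abort test setup traced above reduces to the RPC sequence sketched below; rpc_cmd in the log is the test harness wrapper around scripts/rpc.py, and the delay bdev layered on Malloc0 adds a large artificial latency to every I/O so that commands are still queued when the abort example starts cancelling them. Paths are the workspace paths from this run, and the real script waits for the target's RPC socket (waitforlisten) before issuing the first RPC.

# Sketch of the target setup performed via rpc_cmd in target/abort.sh.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Target runs inside the namespace created earlier (cores 1-3, full tracing).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
# ... wait for the RPC socket here (waitforlisten in the real script) ...

"$RPC" nvmf_create_transport -t tcp -o -u 8192 -a 256
"$RPC" bdev_malloc_create 64 4096 -b Malloc0
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # slow bdev so I/O stays queued
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue depth 128 against the slow namespace for 1 second,
# aborting outstanding commands (this produces the summary that follows).
"$SPDK/build/examples/abort" \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
       -c 0x1 -t 1 -l warning -q 128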
00:13:46.885 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 30028 00:13:46.885 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30093, failed to submit 62 00:13:46.885 success 30032, unsuccess 61, failed 0 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:46.885 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.886 rmmod nvme_tcp 00:13:46.886 rmmod nvme_fabrics 00:13:46.886 rmmod nvme_keyring 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1248324 ']' 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1248324 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1248324 ']' 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1248324 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1248324 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1248324' 00:13:46.886 killing process with pid 1248324 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1248324 00:13:46.886 [2024-05-15 15:31:59.679816] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1248324 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.886 
15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.886 15:31:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.414 15:32:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.414 00:13:49.414 real 0m7.843s 00:13:49.414 user 0m10.764s 00:13:49.414 sys 0m2.970s 00:13:49.414 15:32:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.414 15:32:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:49.414 ************************************ 00:13:49.414 END TEST nvmf_abort 00:13:49.414 ************************************ 00:13:49.414 15:32:02 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:49.414 15:32:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:49.414 15:32:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.414 15:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.414 ************************************ 00:13:49.414 START TEST nvmf_ns_hotplug_stress 00:13:49.414 ************************************ 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:49.414 * Looking for test storage... 00:13:49.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.414 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.415 
15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.415 
15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.415 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.940 15:32:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:51.940 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:51.940 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.940 
15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:51.940 Found net devices under 0000:09:00.0: cvl_0_0 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:51.940 Found net devices under 0000:09:00.1: cvl_0_1 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.940 
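The device discovery traced above resolves each supported PCI function to its kernel network interface by listing /sys/bus/pci/devices/<bdf>/net/. A minimal stand-alone version of that lookup, using the two E810 ports (0000:09:00.0 and 0000:09:00.1) found on this node:

# Map a PCI function to the net devices the kernel created for it, the way
# nvmf/common.sh does when it prints "Found net devices under ...".
shopt -s nullglob
for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    (( ${#pci_net_devs[@]} == 0 )) && continue   # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")      # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done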
15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.940 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:13:51.941 00:13:51.941 --- 10.0.0.2 ping statistics --- 00:13:51.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.941 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:13:51.941 00:13:51.941 --- 10.0.0.1 ping statistics --- 00:13:51.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.941 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1250945 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1250945 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1250945 ']' 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.941 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.941 [2024-05-15 15:32:04.885583] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:51.941 [2024-05-15 15:32:04.885680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.941 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.941 [2024-05-15 15:32:04.927875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:51.941 [2024-05-15 15:32:04.959338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.198 [2024-05-15 15:32:05.045527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.198 [2024-05-15 15:32:05.045581] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.198 [2024-05-15 15:32:05.045608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.198 [2024-05-15 15:32:05.045622] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.198 [2024-05-15 15:32:05.045634] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.198 [2024-05-15 15:32:05.045688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.198 [2024-05-15 15:32:05.045751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.198 [2024-05-15 15:32:05.045758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:52.198 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.455 [2024-05-15 15:32:05.410096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.455 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.712 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.969 [2024-05-15 15:32:05.997137] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:52.969 [2024-05-15 15:32:05.997412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.969 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:53.227 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:53.484 Malloc0 00:13:53.484 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:53.741 Delay0 00:13:53.741 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.998 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:54.255 NULL1 00:13:54.255 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:54.512 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1251313 00:13:54.512 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:54.512 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:13:54.512 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.512 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.882 Read completed with error (sct=0, sc=11) 00:13:55.882 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.882 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:55.882 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:56.140 true 00:13:56.140 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:13:56.140 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.106 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.363 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:57.363 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:57.363 true 00:13:57.620 
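From this point to the end of the test the trace is the same cycle repeated for as long as the background spdk_nvme_perf job (PID 1251313, a 30-second randread run at queue depth 128) stays alive: check that perf is still running, hot-remove namespace 1, hot-add Delay0 back, then resize the NULL1 null bdev to the next size (1001, 1002, ...). A simplified reconstruction of that loop, inferred from the ns_hotplug_stress.sh line numbers (@44-@50) in the trace:

# Simplified reconstruction of the hotplug/resize stress cycle (inferred from
# the trace; PERF_PID is the backgrounded spdk_nvme_perf started at @40/@42).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

while kill -0 "$PERF_PID"; do                       # loop until perf exits
    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1        # hot-remove namespace 1
    "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0      # hot-add Delay0 back
    null_size=$((null_size + 1))                    # 1001, 1002, ... per pass
    "$RPC" bdev_null_resize NULL1 "$null_size"      # grow NULL1 under active I/O
done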
15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:13:57.620 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.620 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.877 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:57.877 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:58.134 true 00:13:58.134 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:13:58.134 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.084 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.341 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:59.341 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:59.598 true 00:13:59.598 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:13:59.598 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.855 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.111 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:00.111 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:00.368 true 00:14:00.368 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:00.368 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.298 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.298 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:01.298 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:01.555 true 00:14:01.555 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:01.555 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.813 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.070 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:02.070 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:02.327 true 00:14:02.327 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:02.327 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.698 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.698 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:03.698 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:03.955 true 00:14:03.955 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:03.955 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.212 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.469 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:04.469 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:04.726 true 00:14:04.726 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:04.727 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.658 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.658 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:05.658 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1010 00:14:05.916 true 00:14:05.916 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:05.916 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.173 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.431 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:06.431 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:06.688 true 00:14:06.688 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:06.688 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.619 15:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.876 15:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:07.876 15:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:08.133 true 00:14:08.133 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:08.134 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.391 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.649 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:08.649 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:08.906 true 00:14:08.906 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:08.906 15:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.838 15:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.096 15:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:10.096 15:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1014 00:14:10.096 true 00:14:10.353 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:10.353 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.610 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.610 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:10.610 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:10.866 true 00:14:10.866 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:10.866 15:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.850 15:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.108 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:12.108 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:12.365 true 00:14:12.365 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:12.365 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.622 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.880 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:12.880 15:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:13.137 true 00:14:13.137 15:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:13.137 15:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.700 15:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.957 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:13.957 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:14.215 true 00:14:14.215 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 
00:14:14.215 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.472 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.730 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:14.730 15:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:14.987 true 00:14:14.987 15:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:14.988 15:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.920 15:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.177 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:16.177 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:16.435 true 00:14:16.435 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:16.435 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.692 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.949 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:16.949 15:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:17.206 true 00:14:17.206 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:17.206 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.464 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.722 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:17.722 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:17.979 true 00:14:17.979 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:17.979 15:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.915 15:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.172 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:19.172 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:19.428 true 00:14:19.428 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:19.428 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.685 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.943 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:19.944 15:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:20.202 true 00:14:20.202 15:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:20.202 15:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.150 15:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.408 15:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:21.408 15:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:21.665 true 00:14:21.665 15:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:21.665 15:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.924 15:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.182 15:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:22.182 15:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:22.439 true 00:14:22.439 15:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:22.439 15:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.699 15:32:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.956 15:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:22.956 15:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:22.956 true 00:14:22.956 15:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:22.956 15:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.332 15:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.332 15:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:24.332 15:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:24.591 true 00:14:24.591 15:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:24.591 15:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.849 Initializing NVMe Controllers 00:14:24.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.849 Controller IO queue size 128, less than required. 00:14:24.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.849 Controller IO queue size 128, less than required. 00:14:24.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:24.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:24.849 Initialization complete. Launching workers. 
00:14:24.849 ======================================================== 00:14:24.849 Latency(us) 00:14:24.849 Device Information : IOPS MiB/s Average min max 00:14:24.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 548.10 0.27 119713.58 3177.52 1012507.47 00:14:24.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10936.07 5.34 11705.20 1761.92 450803.31 00:14:24.849 ======================================================== 00:14:24.849 Total : 11484.17 5.61 16860.10 1761.92 1012507.47 00:14:24.849 00:14:24.849 15:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.105 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:25.105 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:25.363 true 00:14:25.363 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1251313 00:14:25.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1251313) - No such process 00:14:25.363 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1251313 00:14:25.363 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.621 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.878 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:25.878 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:25.878 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:25.878 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:25.878 15:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:26.136 null0 00:14:26.136 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.136 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.136 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:26.446 null1 00:14:26.446 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.446 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.446 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:26.705 null2 00:14:26.705 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.705 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:14:26.706 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:26.706 null3 00:14:26.964 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.964 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.964 15:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:26.964 null4 00:14:26.964 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.964 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.964 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:27.221 null5 00:14:27.221 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.222 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.222 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:27.479 null6 00:14:27.479 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.479 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.479 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:27.737 null7 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
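From this point the trace shows the second phase: sh@58 sets nthreads=8 and an empty pids array, the sh@59-60 loop creates eight null bdevs (bdev_null_create null0 .. null7 with the size and block-size arguments traced above), and the sh@62-64 loop launches one background add_remove worker per namespace, recording each worker's PID; sh@66 then waits on all of them (the eight PIDs listed a little further down). The sh@14-18 lines that follow are those workers running concurrently, which is why their add/remove calls appear interleaved. A hedged sketch of this phase, reconstructed from the traced line numbers (rpc_py is assumed shorthand; the real script may differ in detail):

    add_remove() {                                                            # sh@14-18: one worker per namespace
        local nsid=$1 bdev=$2                                                 # sh@14
        for ((i = 0; i < 10; i++)); do                                        # sh@16: ten add/remove cycles
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

    nthreads=8
    pids=()                                                                   # sh@58
    for ((i = 0; i < nthreads; i++)); do                                      # sh@59
        $rpc_py bdev_null_create "null$i" 100 4096                            # sh@60: arguments as traced
    done
    for ((i = 0; i < nthreads; i++)); do                                      # sh@62
        add_remove $((i + 1)) "null$i" &                                      # sh@63: e.g. add_remove 1 null0
        pids+=($!)                                                            # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                                                         # sh@66: wait for all eight workers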
00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.737 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1255356 1255357 1255359 1255361 1255363 1255365 1255367 1255369 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.738 15:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.995 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.995 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.995 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.995 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.996 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.996 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.996 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.996 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.254 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.254 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.254 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.254 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.254 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.254 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.512 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.771 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.029 15:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.286 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.287 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.544 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.802 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.059 15:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.059 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.059 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.059 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.059 
15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.317 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.575 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.832 15:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.091 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.349 
15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.349 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.607 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.865 15:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.123 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:32.380 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:32.381 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:32.381 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.381 
15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:32.381 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:32.381 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.381 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:32.381 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:32.638 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.638 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.638 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:32.638 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.638 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.638 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.639 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:32.896 15:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.154 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.411 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
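The interleaved ns_hotplug_stress.sh@16-@18 entries above are the heart of this test case: on each of ten iterations the script re-attaches namespaces 1-8 (each backed by one of the null bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 and then detaches them again while the initiator keeps I/O in flight. A minimal sketch of that loop, using the rpc.py path and NQN from the trace; the add/remove calls are assumed to be launched asynchronously, which would explain the shuffled namespace order seen in the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do
      # hot-add namespaces 1-8, each backed by one of the null bdevs null0-null7
      for n in $(seq 1 8); do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
      done
      wait
      # hot-remove them again while connected hosts still have the subsystem open
      for n in $(seq 1 8); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
      done
      wait
  done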
00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.412 rmmod nvme_tcp 00:14:33.412 rmmod nvme_fabrics 00:14:33.412 rmmod nvme_keyring 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1250945 ']' 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1250945 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1250945 ']' 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1250945 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1250945 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1250945' 00:14:33.412 killing process with pid 1250945 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1250945 00:14:33.412 [2024-05-15 15:32:46.369357] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:33.412 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1250945 00:14:33.670 15:32:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.670 15:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.570 15:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.570 00:14:35.570 real 0m46.614s 00:14:35.570 user 3m29.987s 00:14:35.570 sys 0m16.144s 00:14:35.570 15:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:35.570 15:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.570 ************************************ 00:14:35.570 END TEST nvmf_ns_hotplug_stress 00:14:35.570 ************************************ 00:14:35.827 15:32:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:35.827 15:32:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:35.827 15:32:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:35.827 15:32:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.827 ************************************ 00:14:35.827 START TEST nvmf_connect_stress 00:14:35.827 ************************************ 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:35.827 * Looking for test storage... 
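The teardown trace that closes the hotplug case (nvmftestfini through ip -4 addr flush cvl_0_1) condenses to the sequence below; the netns deletion step is an assumption about what _remove_spdk_ns does, since its commands run with xtrace disabled:

  sync
  modprobe -v -r nvme-tcp             # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"  # killprocess: stop the nvmf_tgt reactor (pid 1250945 in this run)
  ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns: drop the target-side namespace
  ip -4 addr flush cvl_0_1            # clear the initiator-side interface for the next test case

With that done, the harness reports the per-case timing (real 0m46.614s here) and run_test moves straight on to connect_stress.sh with the same --transport=tcp argument.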
00:14:35.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.827 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.828 15:32:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:38.357 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:38.357 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:38.357 Found net devices under 0000:09:00.0: cvl_0_0 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.357 15:32:51 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:38.357 Found net devices under 0000:09:00.1: cvl_0_1 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:14:38.357 00:14:38.357 --- 10.0.0.2 ping statistics --- 00:14:38.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.357 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:14:38.357 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:14:38.357 00:14:38.357 --- 10.0.0.1 ping statistics --- 00:14:38.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.357 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1258403 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1258403 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1258403 ']' 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.358 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.615 [2024-05-15 15:32:51.459099] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
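Before any of the SPDK startup output above, nvmf_tcp_init moved one of the two detected E810 ports (cvl_0_0) into a private network namespace so that target and initiator can exercise a real NIC-to-NIC TCP path on a single host. The commands, taken directly from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator address

Both pings succeed (0.259 ms and 0.159 ms round-trip above), so nvmfappstart launches build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE inside the namespace via ip netns exec cvl_0_0_ns_spdk, which is the SPDK/DPDK initialization banner that follows.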
00:14:38.615 [2024-05-15 15:32:51.459186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.615 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.615 [2024-05-15 15:32:51.503137] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:38.615 [2024-05-15 15:32:51.540860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.615 [2024-05-15 15:32:51.626866] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.615 [2024-05-15 15:32:51.626928] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.615 [2024-05-15 15:32:51.626956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.615 [2024-05-15 15:32:51.626970] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.615 [2024-05-15 15:32:51.626982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.615 [2024-05-15 15:32:51.627084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.615 [2024-05-15 15:32:51.627196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.615 [2024-05-15 15:32:51.627200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 [2024-05-15 15:32:51.756864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 [2024-05-15 15:32:51.773840] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:38.874 [2024-05-15 15:32:51.782374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 NULL1 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1258441 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 
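connect_stress.sh then configures the freshly started target over its RPC socket and kicks off the connect/disconnect stressor in the background; the twenty @27/@28 cat entries around this point build up rpc.txt, the batch of RPCs that will be replayed while connections churn (the file's exact contents are not visible in the log). A sketch of the sequence, using the arguments shown in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # run the stressor against that listener for 10 seconds and remember its pid (1258441 here)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  PERF_PID=$!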
15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.874 15:32:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.131 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.131 15:32:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:39.131 15:32:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.131 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.131 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.388 15:32:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:39.388 15:32:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.388 15:32:52 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.388 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.951 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.951 15:32:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:39.951 15:32:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.951 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.951 15:32:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.208 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.208 15:32:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:40.208 15:32:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.208 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.208 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.476 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.476 15:32:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:40.476 15:32:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.476 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.476 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.741 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.741 15:32:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:40.741 15:32:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.741 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.741 15:32:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.998 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.998 15:32:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:40.998 15:32:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.998 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.998 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.595 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.595 15:32:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:41.595 15:32:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.595 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.595 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.852 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.852 15:32:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:41.852 15:32:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.852 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:41.852 15:32:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.109 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.109 15:32:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:42.109 15:32:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.109 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.109 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.366 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.366 15:32:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:42.366 15:32:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.366 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.366 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.623 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.623 15:32:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:42.623 15:32:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.623 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.623 15:32:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.187 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.187 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:43.187 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.188 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.188 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.445 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.445 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:43.445 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.445 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.445 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.702 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.702 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:43.702 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.702 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.702 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.959 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.960 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:43.960 15:32:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.960 15:32:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.960 15:32:56 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.216 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.216 15:32:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:44.216 15:32:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.216 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.216 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.781 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.781 15:32:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:44.781 15:32:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.781 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.781 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.038 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.038 15:32:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:45.038 15:32:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.038 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.038 15:32:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.295 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.295 15:32:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:45.295 15:32:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.295 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.295 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.552 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.552 15:32:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:45.552 15:32:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.552 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.552 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.808 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.808 15:32:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:45.808 15:32:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.808 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.064 15:32:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.320 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.320 15:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:46.320 15:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.320 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.320 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 
-- # set +x 00:14:46.577 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.577 15:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:46.577 15:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.577 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.577 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.833 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.833 15:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:46.833 15:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.833 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.833 15:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.395 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.395 15:33:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:47.395 15:33:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.395 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.395 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.652 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.652 15:33:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:47.652 15:33:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.652 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.652 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.908 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.908 15:33:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:47.908 15:33:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.908 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.908 15:33:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.164 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.164 15:33:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:48.164 15:33:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.164 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.164 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.422 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.422 15:33:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:48.422 15:33:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.422 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.422 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.987 15:33:01 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.987 15:33:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:48.987 15:33:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.987 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.987 15:33:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.987 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1258441 00:14:49.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1258441) - No such process 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1258441 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.245 rmmod nvme_tcp 00:14:49.245 rmmod nvme_fabrics 00:14:49.245 rmmod nvme_keyring 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1258403 ']' 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1258403 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1258403 ']' 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1258403 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258403 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258403' 00:14:49.245 killing process with pid 1258403 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 
1258403 00:14:49.245 [2024-05-15 15:33:02.207534] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:49.245 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1258403 00:14:49.504 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.504 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.504 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.504 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.504 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.504 15:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.505 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.505 15:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.402 15:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.402 00:14:51.402 real 0m15.772s 00:14:51.402 user 0m38.328s 00:14:51.402 sys 0m6.256s 00:14:51.402 15:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.402 15:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.402 ************************************ 00:14:51.402 END TEST nvmf_connect_stress 00:14:51.402 ************************************ 00:14:51.402 15:33:04 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.402 15:33:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:51.402 15:33:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.402 15:33:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.660 ************************************ 00:14:51.660 START TEST nvmf_fused_ordering 00:14:51.660 ************************************ 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.660 * Looking for test storage... 
00:14:51.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.660 15:33:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.190 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.190 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:54.191 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:54.191 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:54.191 Found net devices under 0000:09:00.0: cvl_0_0 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.191 15:33:07 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:54.191 Found net devices under 0000:09:00.1: cvl_0_1 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:54.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:14:54.191 00:14:54.191 --- 10.0.0.2 ping statistics --- 00:14:54.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.191 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:14:54.191 00:14:54.191 --- 10.0.0.1 ping statistics --- 00:14:54.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.191 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1262107 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1262107 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1262107 ']' 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.191 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.192 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.192 [2024-05-15 15:33:07.263176] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:14:54.192 [2024-05-15 15:33:07.263277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.449 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.449 [2024-05-15 15:33:07.310445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:54.449 [2024-05-15 15:33:07.342516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.449 [2024-05-15 15:33:07.425610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.449 [2024-05-15 15:33:07.425661] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.449 [2024-05-15 15:33:07.425690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.449 [2024-05-15 15:33:07.425702] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.449 [2024-05-15 15:33:07.425713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.449 [2024-05-15 15:33:07.425753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.449 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.449 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:54.449 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.449 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.449 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 [2024-05-15 15:33:07.563447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 [2024-05-15 15:33:07.579415] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:54.707 [2024-05-15 15:33:07.579711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 NULL1 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.707 15:33:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:54.707 [2024-05-15 15:33:07.624583] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:54.707 [2024-05-15 15:33:07.624632] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262347 ] 00:14:54.707 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.707 [2024-05-15 15:33:07.667238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
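The test network used by this run was assembled by the harness in the steps above: the first detected ice port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, and a firewall rule opens the NVMe/TCP port. A condensed sketch of that plumbing, using the interface names and addresses from this run (adjust for your own NICs):

  # target-side port goes into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side keeps 10.0.0.1, target side gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in on the initiator-facing port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1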
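The target configuration behind the fused_ordering run below is equally small: a TCP transport, one subsystem backed by a 1000 MiB null bdev, and a listener on the namespaced address. A minimal sketch of the same sequence issued by hand from the SPDK repo root, assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket stands in for the harness's rpc_cmd wrapper:

  # start the target inside the namespace (same invocation the harness uses),
  # then wait for its RPC socket before issuing commands
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # create the TCP transport with the options the harness passes
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem: allow any host (-a), serial number (-s), max 10 namespaces (-m)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # back the subsystem with a 1000 MiB, 512-byte-block null bdev
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # exercise fused command ordering against the new subsystem
  ./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) counters that follow are the tool's per-operation progress output against this target.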
00:14:55.272 Attached to nqn.2016-06.io.spdk:cnode1 00:14:55.272 Namespace ID: 1 size: 1GB 00:14:55.272 fused_ordering(0) 00:14:55.272 fused_ordering(1) 00:14:55.272 fused_ordering(2) 00:14:55.272 fused_ordering(3) 00:14:55.272 fused_ordering(4) 00:14:55.272 fused_ordering(5) 00:14:55.272 fused_ordering(6) 00:14:55.272 fused_ordering(7) 00:14:55.272 fused_ordering(8) 00:14:55.272 fused_ordering(9) 00:14:55.272 fused_ordering(10) 00:14:55.272 fused_ordering(11) 00:14:55.272 fused_ordering(12) 00:14:55.272 fused_ordering(13) 00:14:55.272 fused_ordering(14) 00:14:55.272 fused_ordering(15) 00:14:55.272 fused_ordering(16) 00:14:55.272 fused_ordering(17) 00:14:55.272 fused_ordering(18) 00:14:55.272 fused_ordering(19) 00:14:55.272 fused_ordering(20) 00:14:55.272 fused_ordering(21) 00:14:55.272 fused_ordering(22) 00:14:55.272 fused_ordering(23) 00:14:55.272 fused_ordering(24) 00:14:55.272 fused_ordering(25) 00:14:55.272 fused_ordering(26) 00:14:55.272 fused_ordering(27) 00:14:55.272 fused_ordering(28) 00:14:55.272 fused_ordering(29) 00:14:55.272 fused_ordering(30) 00:14:55.272 fused_ordering(31) 00:14:55.272 fused_ordering(32) 00:14:55.272 fused_ordering(33) 00:14:55.272 fused_ordering(34) 00:14:55.272 fused_ordering(35) 00:14:55.272 fused_ordering(36) 00:14:55.272 fused_ordering(37) 00:14:55.272 fused_ordering(38) 00:14:55.272 fused_ordering(39) 00:14:55.272 fused_ordering(40) 00:14:55.272 fused_ordering(41) 00:14:55.272 fused_ordering(42) 00:14:55.272 fused_ordering(43) 00:14:55.272 fused_ordering(44) 00:14:55.272 fused_ordering(45) 00:14:55.272 fused_ordering(46) 00:14:55.272 fused_ordering(47) 00:14:55.272 fused_ordering(48) 00:14:55.272 fused_ordering(49) 00:14:55.272 fused_ordering(50) 00:14:55.272 fused_ordering(51) 00:14:55.272 fused_ordering(52) 00:14:55.272 fused_ordering(53) 00:14:55.272 fused_ordering(54) 00:14:55.272 fused_ordering(55) 00:14:55.272 fused_ordering(56) 00:14:55.272 fused_ordering(57) 00:14:55.272 fused_ordering(58) 00:14:55.272 fused_ordering(59) 00:14:55.272 fused_ordering(60) 00:14:55.272 fused_ordering(61) 00:14:55.272 fused_ordering(62) 00:14:55.272 fused_ordering(63) 00:14:55.272 fused_ordering(64) 00:14:55.272 fused_ordering(65) 00:14:55.272 fused_ordering(66) 00:14:55.272 fused_ordering(67) 00:14:55.272 fused_ordering(68) 00:14:55.272 fused_ordering(69) 00:14:55.272 fused_ordering(70) 00:14:55.272 fused_ordering(71) 00:14:55.272 fused_ordering(72) 00:14:55.272 fused_ordering(73) 00:14:55.272 fused_ordering(74) 00:14:55.272 fused_ordering(75) 00:14:55.272 fused_ordering(76) 00:14:55.272 fused_ordering(77) 00:14:55.272 fused_ordering(78) 00:14:55.272 fused_ordering(79) 00:14:55.272 fused_ordering(80) 00:14:55.272 fused_ordering(81) 00:14:55.272 fused_ordering(82) 00:14:55.272 fused_ordering(83) 00:14:55.272 fused_ordering(84) 00:14:55.272 fused_ordering(85) 00:14:55.272 fused_ordering(86) 00:14:55.272 fused_ordering(87) 00:14:55.272 fused_ordering(88) 00:14:55.272 fused_ordering(89) 00:14:55.272 fused_ordering(90) 00:14:55.272 fused_ordering(91) 00:14:55.272 fused_ordering(92) 00:14:55.272 fused_ordering(93) 00:14:55.272 fused_ordering(94) 00:14:55.272 fused_ordering(95) 00:14:55.272 fused_ordering(96) 00:14:55.272 fused_ordering(97) 00:14:55.272 fused_ordering(98) 00:14:55.272 fused_ordering(99) 00:14:55.272 fused_ordering(100) 00:14:55.272 fused_ordering(101) 00:14:55.272 fused_ordering(102) 00:14:55.272 fused_ordering(103) 00:14:55.273 fused_ordering(104) 00:14:55.273 fused_ordering(105) 00:14:55.273 fused_ordering(106) 00:14:55.273 fused_ordering(107) 
00:14:55.273 fused_ordering(108) 00:14:55.273 fused_ordering(109) 00:14:55.273 fused_ordering(110) 00:14:55.273 fused_ordering(111) 00:14:55.273 fused_ordering(112) 00:14:55.273 fused_ordering(113) 00:14:55.273 fused_ordering(114) 00:14:55.273 fused_ordering(115) 00:14:55.273 fused_ordering(116) 00:14:55.273 fused_ordering(117) 00:14:55.273 fused_ordering(118) 00:14:55.273 fused_ordering(119) 00:14:55.273 fused_ordering(120) 00:14:55.273 fused_ordering(121) 00:14:55.273 fused_ordering(122) 00:14:55.273 fused_ordering(123) 00:14:55.273 fused_ordering(124) 00:14:55.273 fused_ordering(125) 00:14:55.273 fused_ordering(126) 00:14:55.273 fused_ordering(127) 00:14:55.273 fused_ordering(128) 00:14:55.273 fused_ordering(129) 00:14:55.273 fused_ordering(130) 00:14:55.273 fused_ordering(131) 00:14:55.273 fused_ordering(132) 00:14:55.273 fused_ordering(133) 00:14:55.273 fused_ordering(134) 00:14:55.273 fused_ordering(135) 00:14:55.273 fused_ordering(136) 00:14:55.273 fused_ordering(137) 00:14:55.273 fused_ordering(138) 00:14:55.273 fused_ordering(139) 00:14:55.273 fused_ordering(140) 00:14:55.273 fused_ordering(141) 00:14:55.273 fused_ordering(142) 00:14:55.273 fused_ordering(143) 00:14:55.273 fused_ordering(144) 00:14:55.273 fused_ordering(145) 00:14:55.273 fused_ordering(146) 00:14:55.273 fused_ordering(147) 00:14:55.273 fused_ordering(148) 00:14:55.273 fused_ordering(149) 00:14:55.273 fused_ordering(150) 00:14:55.273 fused_ordering(151) 00:14:55.273 fused_ordering(152) 00:14:55.273 fused_ordering(153) 00:14:55.273 fused_ordering(154) 00:14:55.273 fused_ordering(155) 00:14:55.273 fused_ordering(156) 00:14:55.273 fused_ordering(157) 00:14:55.273 fused_ordering(158) 00:14:55.273 fused_ordering(159) 00:14:55.273 fused_ordering(160) 00:14:55.273 fused_ordering(161) 00:14:55.273 fused_ordering(162) 00:14:55.273 fused_ordering(163) 00:14:55.273 fused_ordering(164) 00:14:55.273 fused_ordering(165) 00:14:55.273 fused_ordering(166) 00:14:55.273 fused_ordering(167) 00:14:55.273 fused_ordering(168) 00:14:55.273 fused_ordering(169) 00:14:55.273 fused_ordering(170) 00:14:55.273 fused_ordering(171) 00:14:55.273 fused_ordering(172) 00:14:55.273 fused_ordering(173) 00:14:55.273 fused_ordering(174) 00:14:55.273 fused_ordering(175) 00:14:55.273 fused_ordering(176) 00:14:55.273 fused_ordering(177) 00:14:55.273 fused_ordering(178) 00:14:55.273 fused_ordering(179) 00:14:55.273 fused_ordering(180) 00:14:55.273 fused_ordering(181) 00:14:55.273 fused_ordering(182) 00:14:55.273 fused_ordering(183) 00:14:55.273 fused_ordering(184) 00:14:55.273 fused_ordering(185) 00:14:55.273 fused_ordering(186) 00:14:55.273 fused_ordering(187) 00:14:55.273 fused_ordering(188) 00:14:55.273 fused_ordering(189) 00:14:55.273 fused_ordering(190) 00:14:55.273 fused_ordering(191) 00:14:55.273 fused_ordering(192) 00:14:55.273 fused_ordering(193) 00:14:55.273 fused_ordering(194) 00:14:55.273 fused_ordering(195) 00:14:55.273 fused_ordering(196) 00:14:55.273 fused_ordering(197) 00:14:55.273 fused_ordering(198) 00:14:55.273 fused_ordering(199) 00:14:55.273 fused_ordering(200) 00:14:55.273 fused_ordering(201) 00:14:55.273 fused_ordering(202) 00:14:55.273 fused_ordering(203) 00:14:55.273 fused_ordering(204) 00:14:55.273 fused_ordering(205) 00:14:55.531 fused_ordering(206) 00:14:55.531 fused_ordering(207) 00:14:55.531 fused_ordering(208) 00:14:55.531 fused_ordering(209) 00:14:55.531 fused_ordering(210) 00:14:55.531 fused_ordering(211) 00:14:55.531 fused_ordering(212) 00:14:55.531 fused_ordering(213) 00:14:55.531 fused_ordering(214) 00:14:55.531 
fused_ordering(215) 00:14:55.531 fused_ordering(216) 00:14:55.531 fused_ordering(217) 00:14:55.531 fused_ordering(218) 00:14:55.531 fused_ordering(219) 00:14:55.531 fused_ordering(220) 00:14:55.531 fused_ordering(221) 00:14:55.531 fused_ordering(222) 00:14:55.531 fused_ordering(223) 00:14:55.531 fused_ordering(224) 00:14:55.531 fused_ordering(225) 00:14:55.531 fused_ordering(226) 00:14:55.531 fused_ordering(227) 00:14:55.531 fused_ordering(228) 00:14:55.531 fused_ordering(229) 00:14:55.531 fused_ordering(230) 00:14:55.531 fused_ordering(231) 00:14:55.531 fused_ordering(232) 00:14:55.531 fused_ordering(233) 00:14:55.531 fused_ordering(234) 00:14:55.531 fused_ordering(235) 00:14:55.531 fused_ordering(236) 00:14:55.531 fused_ordering(237) 00:14:55.531 fused_ordering(238) 00:14:55.531 fused_ordering(239) 00:14:55.531 fused_ordering(240) 00:14:55.531 fused_ordering(241) 00:14:55.531 fused_ordering(242) 00:14:55.531 fused_ordering(243) 00:14:55.531 fused_ordering(244) 00:14:55.531 fused_ordering(245) 00:14:55.531 fused_ordering(246) 00:14:55.531 fused_ordering(247) 00:14:55.531 fused_ordering(248) 00:14:55.531 fused_ordering(249) 00:14:55.531 fused_ordering(250) 00:14:55.531 fused_ordering(251) 00:14:55.531 fused_ordering(252) 00:14:55.531 fused_ordering(253) 00:14:55.531 fused_ordering(254) 00:14:55.531 fused_ordering(255) 00:14:55.531 fused_ordering(256) 00:14:55.531 fused_ordering(257) 00:14:55.531 fused_ordering(258) 00:14:55.531 fused_ordering(259) 00:14:55.531 fused_ordering(260) 00:14:55.531 fused_ordering(261) 00:14:55.531 fused_ordering(262) 00:14:55.531 fused_ordering(263) 00:14:55.531 fused_ordering(264) 00:14:55.531 fused_ordering(265) 00:14:55.531 fused_ordering(266) 00:14:55.531 fused_ordering(267) 00:14:55.531 fused_ordering(268) 00:14:55.531 fused_ordering(269) 00:14:55.531 fused_ordering(270) 00:14:55.531 fused_ordering(271) 00:14:55.531 fused_ordering(272) 00:14:55.531 fused_ordering(273) 00:14:55.531 fused_ordering(274) 00:14:55.531 fused_ordering(275) 00:14:55.531 fused_ordering(276) 00:14:55.531 fused_ordering(277) 00:14:55.531 fused_ordering(278) 00:14:55.531 fused_ordering(279) 00:14:55.531 fused_ordering(280) 00:14:55.531 fused_ordering(281) 00:14:55.531 fused_ordering(282) 00:14:55.531 fused_ordering(283) 00:14:55.531 fused_ordering(284) 00:14:55.531 fused_ordering(285) 00:14:55.531 fused_ordering(286) 00:14:55.531 fused_ordering(287) 00:14:55.531 fused_ordering(288) 00:14:55.532 fused_ordering(289) 00:14:55.532 fused_ordering(290) 00:14:55.532 fused_ordering(291) 00:14:55.532 fused_ordering(292) 00:14:55.532 fused_ordering(293) 00:14:55.532 fused_ordering(294) 00:14:55.532 fused_ordering(295) 00:14:55.532 fused_ordering(296) 00:14:55.532 fused_ordering(297) 00:14:55.532 fused_ordering(298) 00:14:55.532 fused_ordering(299) 00:14:55.532 fused_ordering(300) 00:14:55.532 fused_ordering(301) 00:14:55.532 fused_ordering(302) 00:14:55.532 fused_ordering(303) 00:14:55.532 fused_ordering(304) 00:14:55.532 fused_ordering(305) 00:14:55.532 fused_ordering(306) 00:14:55.532 fused_ordering(307) 00:14:55.532 fused_ordering(308) 00:14:55.532 fused_ordering(309) 00:14:55.532 fused_ordering(310) 00:14:55.532 fused_ordering(311) 00:14:55.532 fused_ordering(312) 00:14:55.532 fused_ordering(313) 00:14:55.532 fused_ordering(314) 00:14:55.532 fused_ordering(315) 00:14:55.532 fused_ordering(316) 00:14:55.532 fused_ordering(317) 00:14:55.532 fused_ordering(318) 00:14:55.532 fused_ordering(319) 00:14:55.532 fused_ordering(320) 00:14:55.532 fused_ordering(321) 00:14:55.532 fused_ordering(322) 
00:14:55.532 fused_ordering(323) 00:14:55.532 fused_ordering(324) 00:14:55.532 fused_ordering(325) 00:14:55.532 fused_ordering(326) 00:14:55.532 fused_ordering(327) 00:14:55.532 fused_ordering(328) 00:14:55.532 fused_ordering(329) 00:14:55.532 fused_ordering(330) 00:14:55.532 fused_ordering(331) 00:14:55.532 fused_ordering(332) 00:14:55.532 fused_ordering(333) 00:14:55.532 fused_ordering(334) 00:14:55.532 fused_ordering(335) 00:14:55.532 fused_ordering(336) 00:14:55.532 fused_ordering(337) 00:14:55.532 fused_ordering(338) 00:14:55.532 fused_ordering(339) 00:14:55.532 fused_ordering(340) 00:14:55.532 fused_ordering(341) 00:14:55.532 fused_ordering(342) 00:14:55.532 fused_ordering(343) 00:14:55.532 fused_ordering(344) 00:14:55.532 fused_ordering(345) 00:14:55.532 fused_ordering(346) 00:14:55.532 fused_ordering(347) 00:14:55.532 fused_ordering(348) 00:14:55.532 fused_ordering(349) 00:14:55.532 fused_ordering(350) 00:14:55.532 fused_ordering(351) 00:14:55.532 fused_ordering(352) 00:14:55.532 fused_ordering(353) 00:14:55.532 fused_ordering(354) 00:14:55.532 fused_ordering(355) 00:14:55.532 fused_ordering(356) 00:14:55.532 fused_ordering(357) 00:14:55.532 fused_ordering(358) 00:14:55.532 fused_ordering(359) 00:14:55.532 fused_ordering(360) 00:14:55.532 fused_ordering(361) 00:14:55.532 fused_ordering(362) 00:14:55.532 fused_ordering(363) 00:14:55.532 fused_ordering(364) 00:14:55.532 fused_ordering(365) 00:14:55.532 fused_ordering(366) 00:14:55.532 fused_ordering(367) 00:14:55.532 fused_ordering(368) 00:14:55.532 fused_ordering(369) 00:14:55.532 fused_ordering(370) 00:14:55.532 fused_ordering(371) 00:14:55.532 fused_ordering(372) 00:14:55.532 fused_ordering(373) 00:14:55.532 fused_ordering(374) 00:14:55.532 fused_ordering(375) 00:14:55.532 fused_ordering(376) 00:14:55.532 fused_ordering(377) 00:14:55.532 fused_ordering(378) 00:14:55.532 fused_ordering(379) 00:14:55.532 fused_ordering(380) 00:14:55.532 fused_ordering(381) 00:14:55.532 fused_ordering(382) 00:14:55.532 fused_ordering(383) 00:14:55.532 fused_ordering(384) 00:14:55.532 fused_ordering(385) 00:14:55.532 fused_ordering(386) 00:14:55.532 fused_ordering(387) 00:14:55.532 fused_ordering(388) 00:14:55.532 fused_ordering(389) 00:14:55.532 fused_ordering(390) 00:14:55.532 fused_ordering(391) 00:14:55.532 fused_ordering(392) 00:14:55.532 fused_ordering(393) 00:14:55.532 fused_ordering(394) 00:14:55.532 fused_ordering(395) 00:14:55.532 fused_ordering(396) 00:14:55.532 fused_ordering(397) 00:14:55.532 fused_ordering(398) 00:14:55.532 fused_ordering(399) 00:14:55.532 fused_ordering(400) 00:14:55.532 fused_ordering(401) 00:14:55.532 fused_ordering(402) 00:14:55.532 fused_ordering(403) 00:14:55.532 fused_ordering(404) 00:14:55.532 fused_ordering(405) 00:14:55.532 fused_ordering(406) 00:14:55.532 fused_ordering(407) 00:14:55.532 fused_ordering(408) 00:14:55.532 fused_ordering(409) 00:14:55.532 fused_ordering(410) 00:14:56.097 fused_ordering(411) 00:14:56.097 fused_ordering(412) 00:14:56.097 fused_ordering(413) 00:14:56.097 fused_ordering(414) 00:14:56.097 fused_ordering(415) 00:14:56.097 fused_ordering(416) 00:14:56.097 fused_ordering(417) 00:14:56.097 fused_ordering(418) 00:14:56.097 fused_ordering(419) 00:14:56.097 fused_ordering(420) 00:14:56.097 fused_ordering(421) 00:14:56.097 fused_ordering(422) 00:14:56.097 fused_ordering(423) 00:14:56.097 fused_ordering(424) 00:14:56.097 fused_ordering(425) 00:14:56.097 fused_ordering(426) 00:14:56.097 fused_ordering(427) 00:14:56.097 fused_ordering(428) 00:14:56.097 fused_ordering(429) 00:14:56.097 
fused_ordering(430) through fused_ordering(967) [identical per-iteration entries condensed; every iteration completed, logged between 00:14:56.097 and 00:14:57.653]
00:14:57.653 fused_ordering(968) 00:14:57.653 fused_ordering(969) 00:14:57.653 fused_ordering(970) 00:14:57.653 fused_ordering(971) 00:14:57.653 fused_ordering(972) 00:14:57.653 fused_ordering(973) 00:14:57.653 fused_ordering(974) 00:14:57.653 fused_ordering(975) 00:14:57.653 fused_ordering(976) 00:14:57.653 fused_ordering(977) 00:14:57.653 fused_ordering(978) 00:14:57.653 fused_ordering(979) 00:14:57.653 fused_ordering(980) 00:14:57.653 fused_ordering(981) 00:14:57.653 fused_ordering(982) 00:14:57.653 fused_ordering(983) 00:14:57.653 fused_ordering(984) 00:14:57.653 fused_ordering(985) 00:14:57.653 fused_ordering(986) 00:14:57.653 fused_ordering(987) 00:14:57.653 fused_ordering(988) 00:14:57.653 fused_ordering(989) 00:14:57.653 fused_ordering(990) 00:14:57.653 fused_ordering(991) 00:14:57.653 fused_ordering(992) 00:14:57.653 fused_ordering(993) 00:14:57.653 fused_ordering(994) 00:14:57.653 fused_ordering(995) 00:14:57.653 fused_ordering(996) 00:14:57.653 fused_ordering(997) 00:14:57.653 fused_ordering(998) 00:14:57.653 fused_ordering(999) 00:14:57.653 fused_ordering(1000) 00:14:57.653 fused_ordering(1001) 00:14:57.653 fused_ordering(1002) 00:14:57.653 fused_ordering(1003) 00:14:57.653 fused_ordering(1004) 00:14:57.653 fused_ordering(1005) 00:14:57.653 fused_ordering(1006) 00:14:57.653 fused_ordering(1007) 00:14:57.653 fused_ordering(1008) 00:14:57.653 fused_ordering(1009) 00:14:57.653 fused_ordering(1010) 00:14:57.653 fused_ordering(1011) 00:14:57.653 fused_ordering(1012) 00:14:57.653 fused_ordering(1013) 00:14:57.653 fused_ordering(1014) 00:14:57.653 fused_ordering(1015) 00:14:57.653 fused_ordering(1016) 00:14:57.653 fused_ordering(1017) 00:14:57.653 fused_ordering(1018) 00:14:57.653 fused_ordering(1019) 00:14:57.653 fused_ordering(1020) 00:14:57.653 fused_ordering(1021) 00:14:57.653 fused_ordering(1022) 00:14:57.653 fused_ordering(1023) 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.653 rmmod nvme_tcp 00:14:57.653 rmmod nvme_fabrics 00:14:57.653 rmmod nvme_keyring 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1262107 ']' 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1262107 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1262107 ']' 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1262107 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:57.653 15:33:10 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1262107 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1262107' 00:14:57.653 killing process with pid 1262107 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1262107 00:14:57.653 [2024-05-15 15:33:10.539116] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:57.653 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1262107 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.911 15:33:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.814 15:33:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.814 00:14:59.814 real 0m8.282s 00:14:59.814 user 0m5.581s 00:14:59.814 sys 0m3.826s 00:14:59.814 15:33:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:59.814 15:33:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.814 ************************************ 00:14:59.814 END TEST nvmf_fused_ordering 00:14:59.814 ************************************ 00:14:59.814 15:33:12 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:59.814 15:33:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:59.814 15:33:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:59.814 15:33:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.814 ************************************ 00:14:59.814 START TEST nvmf_delete_subsystem 00:14:59.814 ************************************ 00:14:59.814 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:59.814 * Looking for test storage... 
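Before the delete_subsystem setup begins, it helps to summarize what the nvmf_fused_ordering teardown above actually did. A condensed sketch of the nvmftestfini path for the tcp transport follows; the helper names (nvmfcleanup, killprocess, _remove_spdk_ns) are taken from the trace, but their bodies live in test/nvmf/common.sh and test/common/autotest_common.sh, so the expansion here is approximate rather than a literal transcript:
  sync                            # nvmfcleanup: flush before unloading the host-side modules
  modprobe -v -r nvme-tcp         # pulls out nvme_tcp, nvme_fabrics, nvme_keyring (see the rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill 1262107 && wait 1262107    # killprocess: stop the nvmf_tgt app that served the test
  _remove_spdk_ns                 # remove the target network namespace (body not shown in the trace)
  ip -4 addr flush cvl_0_1        # drop the initiator-side test address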
00:15:00.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.072 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.072 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.073 15:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:02.605 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.605 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:02.606 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:02.606 Found net devices under 0000:09:00.0: cvl_0_0 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:02.606 Found net devices under 0000:09:00.1: cvl_0_1 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:15:02.606 00:15:02.606 --- 10.0.0.2 ping statistics --- 00:15:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.606 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:15:02.606 00:15:02.606 --- 10.0.0.1 ping statistics --- 00:15:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.606 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1265246 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1265246 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1265246 ']' 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
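The delete_subsystem run reuses the usual two-port layout for these phy tests: nvmf_tcp_init keeps one ice port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator side and moves the other (cvl_0_0, 10.0.0.2) into a private namespace for the target. The commands traced above reduce to the following; device names, addresses and the nvmf_tgt arguments are the ones printed by this host, only the workspace path is shortened:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3   # nvmfappstart, cores 0-1
Putting the target behind a namespace lets a single machine exercise real NIC ports as both target and initiator instead of going over loopback.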
00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:02.606 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.606 [2024-05-15 15:33:15.627345] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:02.606 [2024-05-15 15:33:15.627418] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.606 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.606 [2024-05-15 15:33:15.670394] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:02.864 [2024-05-15 15:33:15.708334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:02.864 [2024-05-15 15:33:15.795931] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.864 [2024-05-15 15:33:15.795994] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.865 [2024-05-15 15:33:15.796010] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.865 [2024-05-15 15:33:15.796023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.865 [2024-05-15 15:33:15.796035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.865 [2024-05-15 15:33:15.796126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.865 [2024-05-15 15:33:15.796132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 [2024-05-15 15:33:15.945603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.865 15:33:15 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 [2024-05-15 15:33:15.961620] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:02.865 [2024-05-15 15:33:15.961929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.865 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.123 NULL1 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.123 Delay0 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1265385 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:03.123 15:33:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:03.123 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.123 [2024-05-15 15:33:16.036521] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
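The rpc_cmd calls traced above build the subsystem that the test is about to delete out from under active I/O. In scripts/rpc.py terms (rpc_cmd is a thin wrapper around it) the sequence is roughly:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # transport options taken verbatim from the trace
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512                   # 1000 MB null backing bdev, 512-byte blocks
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
The delay bdev is the point of the exercise: with roughly one second of injected latency per command (the four 1000000 values should be the average/p99 read and write latencies in microseconds) and spdk_nvme_perf keeping a 128-deep 70/30 randrw queue of 512-byte I/Os against the target from cores 2-3 (-c 0xC), there is guaranteed to be a large backlog of outstanding commands when the subsystem is deleted two seconds into the run.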
00:15:05.058 15:33:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.058 15:33:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.058 15:33:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 starting I/O failed: -6 00:15:05.316 Write completed with error (sct=0, sc=8) 00:15:05.316 [2024-05-15 15:33:18.209292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fddbc00c600 is same with the state(5) to be set 00:15:05.316 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 
Read completed with error (sct=0, sc=8) 00:15:05.317 starting I/O failed: -6 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 starting I/O failed: -6 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 starting I/O failed: -6 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 starting I/O failed: -6 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 starting I/O failed: -6 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 starting I/O failed: -6 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Read completed with error (sct=0, sc=8) 00:15:05.317 Write completed with error 
(sct=0, sc=8)
[hundreds of repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries, interleaved with repeated "starting I/O failed: -6" markers, condensed here; only the distinct error messages are kept below]
00:15:05.317 [2024-05-15 15:33:18.210132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeffbd0 is same with the state(5) to be set
00:15:06.249 [2024-05-15 15:33:19.175570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeff5b0 is same with the state(5) to be set
00:15:06.249 [2024-05-15 15:33:19.209909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fddbc000c00 is same with the state(5) to be set
00:15:06.250 [2024-05-15 15:33:19.210992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fddbc00c2f0 is same with the state(5) to be set
00:15:06.250 [2024-05-15 15:33:19.211741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee7ca0 is same with the state(5) to be set
00:15:06.250 [2024-05-15 15:33:19.211935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee1980 is same with the state(5) to be set
00:15:06.250 Initializing NVMe Controllers 00:15:06.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.250 Controller IO queue size 128, less than required. 00:15:06.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:06.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:06.250 Initialization complete. Launching workers.
00:15:06.250 ======================================================== 00:15:06.250 Latency(us) 00:15:06.250 Device Information : IOPS MiB/s Average min max 00:15:06.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.61 0.09 884359.15 453.64 1013677.86 00:15:06.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.65 0.08 897519.60 693.18 1012621.72 00:15:06.250 ======================================================== 00:15:06.250 Total : 345.26 0.17 890825.92 453.64 1013677.86 00:15:06.250 00:15:06.250 [2024-05-15 15:33:19.212854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeff5b0 (9): Bad file descriptor 00:15:06.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:06.250 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.250 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:06.250 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1265385 00:15:06.250 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1265385 00:15:06.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1265385) - No such process 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1265385 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1265385 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1265385 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
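The delay / "kill -0 1265385" / "sleep 0.5" trace above is delete_subsystem.sh polling for the spdk_nvme_perf process to die once its subsystem has been deleted, then asserting via the NOT helper that wait-ing on the PID fails. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script:

    # $perf_pid is assumed to hold the PID captured when spdk_nvme_perf was
    # started in the background. Poll until it exits; give up after ~15 s.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; exit 1; }
        sleep 0.5
    done
    # The script then runs "NOT wait $perf_pid": an autotest helper that
    # succeeds only when the wrapped command fails, confirming the process
    # is really gone.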
00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.816 [2024-05-15 15:33:19.736938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1265789 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:06.816 15:33:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.816 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.816 [2024-05-15 15:33:19.799203] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
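Collected in one place, the rpc_cmd and spdk_nvme_perf invocations traced above amount to the sequence below. Paths are shown relative to an SPDK checkout; treat this as a sketch of what the trace shows, not an excerpt of delete_subsystem.sh:

    rpc=scripts/rpc.py    # assumed relative path to rpc.py inside the SPDK tree

    # Re-create the subsystem (10 namespaces max), expose it over TCP on
    # 10.0.0.2:4420, and attach the Delay0 bdev as a namespace.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 3-second 70% read / 30% write random workload at queue depth 128.
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!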
00:15:07.381 15:33:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.381 15:33:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:07.381 15:33:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.946 15:33:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.946 15:33:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:07.946 15:33:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.203 15:33:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.203 15:33:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:08.203 15:33:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.768 15:33:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.768 15:33:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:08.768 15:33:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.332 15:33:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:09.332 15:33:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:09.332 15:33:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.896 15:33:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:09.896 15:33:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:09.896 15:33:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.896 Initializing NVMe Controllers 00:15:09.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.896 Controller IO queue size 128, less than required. 00:15:09.896 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:09.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:09.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:09.896 Initialization complete. Launching workers. 
00:15:09.896 ======================================================== 00:15:09.896 Latency(us) 00:15:09.896 Device Information : IOPS MiB/s Average min max 00:15:09.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004013.58 1000225.18 1041575.65 00:15:09.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005424.89 1000224.07 1043745.74 00:15:09.896 ======================================================== 00:15:09.896 Total : 256.00 0.12 1004719.23 1000224.07 1043745.74 00:15:09.896 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1265789 00:15:10.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1265789) - No such process 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1265789 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.461 rmmod nvme_tcp 00:15:10.461 rmmod nvme_fabrics 00:15:10.461 rmmod nvme_keyring 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1265246 ']' 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1265246 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1265246 ']' 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1265246 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1265246 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1265246' 00:15:10.461 killing process with pid 1265246 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1265246 00:15:10.461 [2024-05-15 15:33:23.353018] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:10.461 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 1265246 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.721 15:33:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.621 15:33:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.621 00:15:12.621 real 0m12.747s 00:15:12.621 user 0m27.923s 00:15:12.621 sys 0m3.243s 00:15:12.621 15:33:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:12.621 15:33:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:12.621 ************************************ 00:15:12.621 END TEST nvmf_delete_subsystem 00:15:12.621 ************************************ 00:15:12.621 15:33:25 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:12.621 15:33:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:12.621 15:33:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:12.621 15:33:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.621 ************************************ 00:15:12.621 START TEST nvmf_ns_masking 00:15:12.621 ************************************ 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:12.621 * Looking for test storage... 
00:15:12.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.621 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.878 15:33:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=fc58a2ac-985c-4474-8890-67442674a3a5 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.879 15:33:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:12.879 15:33:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.404 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:15.405 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:15.405 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:15.405 Found net devices under 0000:09:00.0: cvl_0_0 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
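For context, the scan above is the harness picking out the Intel E810 ports (vendor 0x8086, device 0x159b) whose net devices it will use for the TCP test. A rough stand-alone equivalent of that lookup, assuming lspci is available (this is not the nvmf/common.sh code itself):

    # List E810 ports and the kernel net device behind each one, mirroring the
    # pci_devs/net_devs bookkeeping in the trace.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done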
00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:15.405 Found net devices under 0000:09:00.1: cvl_0_1 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:15:15.405 00:15:15.405 --- 10.0.0.2 ping statistics --- 00:15:15.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.405 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:15:15.405 00:15:15.405 --- 10.0.0.1 ping statistics --- 00:15:15.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.405 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1268548 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1268548 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1268548 ']' 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.405 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.405 [2024-05-15 15:33:28.351567] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
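Stripped of the xtrace noise, the target-side bring-up traced above (one port moved into its own network namespace, addresses assigned, the NVMe/TCP port opened in the firewall, and nvmf_tgt started inside that namespace) corresponds roughly to the sequence below. This is a sketch of what the common.sh helpers do, not their verbatim contents:

    # Target port cvl_0_0 lives in its own namespace at 10.0.0.2; the
    # initiator port cvl_0_1 stays in the root namespace at 10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Start the NVMe-oF target inside the namespace; it listens for RPCs on
    # /var/tmp/spdk.sock (binary path assumed relative to an SPDK build).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &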
00:15:15.405 [2024-05-15 15:33:28.351662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.405 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.405 [2024-05-15 15:33:28.395999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:15.405 [2024-05-15 15:33:28.427161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.662 [2024-05-15 15:33:28.515094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.662 [2024-05-15 15:33:28.515145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.662 [2024-05-15 15:33:28.515173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.662 [2024-05-15 15:33:28.515184] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.662 [2024-05-15 15:33:28.515194] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.662 [2024-05-15 15:33:28.515271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.662 [2024-05-15 15:33:28.515338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.662 [2024-05-15 15:33:28.515404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.662 [2024-05-15 15:33:28.515406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.662 15:33:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:15.919 [2024-05-15 15:33:28.943868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.919 15:33:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:15.919 15:33:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:15.919 15:33:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:16.176 Malloc1 00:15:16.176 15:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:16.433 Malloc2 00:15:16.433 15:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:16.690 15:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:16.948 15:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.206 [2024-05-15 15:33:30.220844] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:17.206 [2024-05-15 15:33:30.221179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.206 15:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:17.206 15:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc58a2ac-985c-4474-8890-67442674a3a5 -a 10.0.0.2 -s 4420 -i 4 00:15:17.462 15:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.462 15:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:17.462 15:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.462 15:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:17.462 15:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:19.356 [ 0]:0x1 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=98b8d9d13734406ebc5a6ac0e16d21e6 00:15:19.356 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 98b8d9d13734406ebc5a6ac0e16d21e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.357 15:33:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:19.613 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:19.613 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.613 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:19.613 [ 0]:0x1 00:15:19.613 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.613 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=98b8d9d13734406ebc5a6ac0e16d21e6 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 98b8d9d13734406ebc5a6ac0e16d21e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:19.870 [ 1]:0x2 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:19.870 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.871 15:33:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.127 15:33:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc58a2ac-985c-4474-8890-67442674a3a5 -a 10.0.0.2 -s 4420 -i 4 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:20.384 15:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ 
<= 15 )) 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:22.907 [ 0]:0x2 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:22.907 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.908 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:22.908 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:22.908 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.908 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:22.908 [ 0]:0x1 00:15:22.908 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.908 15:33:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=98b8d9d13734406ebc5a6ac0e16d21e6 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 98b8d9d13734406ebc5a6ac0e16d21e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:23.165 [ 1]:0x2 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.165 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.422 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:23.423 [ 0]:0x2 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.423 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc58a2ac-985c-4474-8890-67442674a3a5 -a 10.0.0.2 -s 4420 -i 4 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:23.680 15:33:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:26.203 [ 0]:0x1 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=98b8d9d13734406ebc5a6ac0e16d21e6 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 98b8d9d13734406ebc5a6ac0e16d21e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:26.203 [ 1]:0x2 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.203 15:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:26.203 [ 0]:0x2 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.203 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.460 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.461 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.461 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.461 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:26.461 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:26.718 [2024-05-15 15:33:39.595768] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:26.718 request: 00:15:26.718 { 00:15:26.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.718 "nsid": 2, 00:15:26.718 "host": "nqn.2016-06.io.spdk:host1", 00:15:26.718 "method": "nvmf_ns_remove_host", 00:15:26.718 "req_id": 1 00:15:26.718 } 00:15:26.718 Got JSON-RPC error response 00:15:26.718 response: 00:15:26.718 { 00:15:26.718 "code": -32602, 00:15:26.718 "message": "Invalid parameters" 00:15:26.718 } 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:26.718 [ 0]:0x2 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.718 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.719 15:33:39 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=47abd09d12364156ad503c8727be8bae 00:15:26.719 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 47abd09d12364156ad503c8727be8bae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.719 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:26.719 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.719 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.978 15:33:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.978 rmmod nvme_tcp 00:15:26.978 rmmod nvme_fabrics 00:15:26.978 rmmod nvme_keyring 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1268548 ']' 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1268548 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1268548 ']' 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1268548 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1268548 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1268548' 00:15:26.978 killing process with pid 1268548 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1268548 00:15:26.978 [2024-05-15 15:33:40.041500] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:26.978 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 1268548 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.236 15:33:40 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.236 15:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.764 15:33:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.764 00:15:29.764 real 0m16.708s 00:15:29.764 user 0m50.657s 00:15:29.764 sys 0m4.036s 00:15:29.764 15:33:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.764 15:33:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 ************************************ 00:15:29.764 END TEST nvmf_ns_masking 00:15:29.764 ************************************ 00:15:29.764 15:33:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:29.764 15:33:42 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:29.764 15:33:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:29.764 15:33:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:29.764 15:33:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 ************************************ 00:15:29.764 START TEST nvmf_nvme_cli 00:15:29.764 ************************************ 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:29.764 * Looking for test storage... 
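For orientation, the nvmf_ns_masking run that ends above boils down to a handful of nvme-cli and rpc.py calls. The lines below are a condensed sketch distilled from the traced commands, not the full test: the NQNs and the 10.0.0.2:4420 listener are simply what this run used, and the NOT/valid_exec_arg error wrappers from autotest_common.sh are omitted.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  ns_is_visible() {
      # a namespace counts as visible when list-ns reports it and its NGUID is non-zero
      nvme list-ns /dev/nvme0 | grep -q "$1" || return 1
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # expose nsid 1 to host1
  ns_is_visible 0x1                                                                 # passes once exposed
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask nsid 1 again
  ns_is_visible 0x1                                                                 # fails: the NGUID reads back as all zeros
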
00:15:29.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.764 15:33:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.357 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:32.358 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:32.358 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:32.358 Found net devices under 0000:09:00.0: cvl_0_0 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:32.358 Found net devices under 0000:09:00.1: cvl_0_1 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:32.358 15:33:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:32.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:15:32.358 00:15:32.358 --- 10.0.0.2 ping statistics --- 00:15:32.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.358 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:15:32.358 00:15:32.358 --- 10.0.0.1 ping statistics --- 00:15:32.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.358 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1272276 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1272276 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1272276 ']' 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.358 [2024-05-15 15:33:45.098790] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:32.358 [2024-05-15 15:33:45.098876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.358 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.358 [2024-05-15 15:33:45.143011] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:32.358 [2024-05-15 15:33:45.181730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.358 [2024-05-15 15:33:45.270927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
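Before any NVMe/TCP traffic flows, nvmf_tcp_init (traced a little further up) splits the two cvl_0_* ports into a target side and an initiator side and proves reachability with one ping in each direction. Condensed, using the device names and 10.0.0.x addresses specific to this rig:

  ip netns add cvl_0_0_ns_spdk                                   # target port gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
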
00:15:32.358 [2024-05-15 15:33:45.270984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.358 [2024-05-15 15:33:45.271001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.358 [2024-05-15 15:33:45.271014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.358 [2024-05-15 15:33:45.271027] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.358 [2024-05-15 15:33:45.271121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.358 [2024-05-15 15:33:45.271199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.358 [2024-05-15 15:33:45.271294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.358 [2024-05-15 15:33:45.271297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.358 [2024-05-15 15:33:45.425132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.358 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.359 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:32.359 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.359 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.359 Malloc0 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.616 Malloc1 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.616 [2024-05-15 15:33:45.510292] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:32.616 [2024-05-15 15:33:45.510627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:15:32.616 00:15:32.616 Discovery Log Number of Records 2, Generation counter 2 00:15:32.616 =====Discovery Log Entry 0====== 00:15:32.616 trtype: tcp 00:15:32.616 adrfam: ipv4 00:15:32.616 subtype: current discovery subsystem 00:15:32.616 treq: not required 00:15:32.616 portid: 0 00:15:32.616 trsvcid: 4420 00:15:32.616 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:32.616 traddr: 10.0.0.2 00:15:32.616 eflags: explicit discovery connections, duplicate discovery information 00:15:32.616 sectype: none 00:15:32.616 =====Discovery Log Entry 1====== 00:15:32.616 trtype: tcp 00:15:32.616 adrfam: ipv4 00:15:32.616 subtype: nvme subsystem 00:15:32.616 treq: not required 00:15:32.616 portid: 0 00:15:32.616 trsvcid: 4420 00:15:32.616 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:32.616 traddr: 10.0.0.2 00:15:32.616 eflags: none 00:15:32.616 sectype: none 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:32.616 15:33:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:32.616 15:33:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.182 15:33:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:33.182 15:33:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:33.182 15:33:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.182 15:33:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:33.182 15:33:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:33.182 15:33:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:35.708 /dev/nvme0n1 ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli 
-- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.708 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.708 rmmod nvme_tcp 
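The nvmf_nvme_cli pass being torn down here has a simple shape: build a two-namespace subsystem on the target that was just started, then exercise it from the initiator with stock nvme-cli. A condensed sketch of the traced calls follows; $rpc stands for the scripts/rpc.py wrapper behind rpc_cmd, the NQNs, serial and addresses are this run's values, and the host NQN/ID pair was generated by nvme gen-hostnqn earlier in nvmf/common.sh.

  # target side
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # initiator side
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
  nvme connect  --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: poll until both namespaces show up (the real helper gives up after 15 tries)
  until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )); do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
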
00:15:35.708 rmmod nvme_fabrics 00:15:35.965 rmmod nvme_keyring 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1272276 ']' 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1272276 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 1272276 ']' 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1272276 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1272276 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1272276' 00:15:35.965 killing process with pid 1272276 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1272276 00:15:35.965 [2024-05-15 15:33:48.865934] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:35.965 15:33:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1272276 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.223 15:33:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.124 15:33:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.124 00:15:38.124 real 0m8.781s 00:15:38.124 user 0m15.976s 00:15:38.124 sys 0m2.456s 00:15:38.124 15:33:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:38.124 15:33:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:38.124 ************************************ 00:15:38.124 END TEST nvmf_nvme_cli 00:15:38.124 ************************************ 00:15:38.383 15:33:51 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:38.383 15:33:51 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:38.383 15:33:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:38.383 15:33:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:15:38.383 15:33:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.383 ************************************ 00:15:38.383 START TEST nvmf_vfio_user 00:15:38.383 ************************************ 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:38.383 * Looking for test storage... 00:15:38.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 15:33:51 
nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.383 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1273197 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1273197' 00:15:38.384 Process pid: 1273197 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1273197 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1273197 ']' 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.384 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:38.384 [2024-05-15 15:33:51.387298] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:38.384 [2024-05-15 15:33:51.387374] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.384 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.384 [2024-05-15 15:33:51.424547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:38.384 [2024-05-15 15:33:51.457392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.642 [2024-05-15 15:33:51.541863] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.642 [2024-05-15 15:33:51.541914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.642 [2024-05-15 15:33:51.541942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.642 [2024-05-15 15:33:51.541960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.642 [2024-05-15 15:33:51.541971] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.642 [2024-05-15 15:33:51.542049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.642 [2024-05-15 15:33:51.544250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.642 [2024-05-15 15:33:51.544277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.642 [2024-05-15 15:33:51.544280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.642 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.642 15:33:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:38.642 15:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:39.575 15:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:39.832 15:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:39.832 15:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:39.832 15:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.832 15:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:39.832 15:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:40.089 Malloc1 00:15:40.089 15:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:40.346 15:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:40.603 15:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:40.860 [2024-05-15 15:33:53.890278] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:40.860 15:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.860 15:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:40.860 15:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:41.117 Malloc2 00:15:41.117 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:41.374 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:41.631 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
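For reference, the vfio-user target setup exercised in the trace above reduces to the following RPC sequence (a minimal sketch assembled only from the commands already logged here; it assumes an nvmf_tgt process is already running and listening on /var/tmp/spdk.sock, and reuses the same paths, bdev names and NQNs shown above — SPDK_ROOT and rpc are local shorthands introduced for readability, not part of the test script):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK_ROOT/scripts/rpc.py
  # one VFIOUSER transport for the whole target
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      # per-controller socket directory, 64 MiB malloc bdev, subsystem, namespace and listener
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done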
00:15:41.888 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:41.888 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:41.888 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.888 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:41.888 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:41.888 15:33:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:41.888 [2024-05-15 15:33:54.927110] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:41.888 [2024-05-15 15:33:54.927153] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273618 ] 00:15:41.888 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.888 [2024-05-15 15:33:54.944948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:41.888 [2024-05-15 15:33:54.962668] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:41.888 [2024-05-15 15:33:54.968657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.888 [2024-05-15 15:33:54.968684] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbe675ec000 00:15:41.888 [2024-05-15 15:33:54.969653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.970644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.971650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.972650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.973659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.974660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.975673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.976673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.888 [2024-05-15 15:33:54.977683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.888 [2024-05-15 
15:33:54.977703] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbe6639d000 00:15:41.888 [2024-05-15 15:33:54.978855] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.147 [2024-05-15 15:33:54.993091] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:42.147 [2024-05-15 15:33:54.993134] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:42.147 [2024-05-15 15:33:55.001818] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:42.147 [2024-05-15 15:33:55.001876] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:42.147 [2024-05-15 15:33:55.001968] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:42.147 [2024-05-15 15:33:55.002000] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:42.147 [2024-05-15 15:33:55.002011] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:42.147 [2024-05-15 15:33:55.002815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:42.147 [2024-05-15 15:33:55.002836] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:42.147 [2024-05-15 15:33:55.002848] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:42.147 [2024-05-15 15:33:55.003820] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:42.147 [2024-05-15 15:33:55.003838] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:42.147 [2024-05-15 15:33:55.003851] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:42.147 [2024-05-15 15:33:55.004827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:42.147 [2024-05-15 15:33:55.004844] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:42.147 [2024-05-15 15:33:55.005835] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:42.147 [2024-05-15 15:33:55.005854] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:42.147 [2024-05-15 15:33:55.005863] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:42.147 [2024-05-15 
15:33:55.005874] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:42.147 [2024-05-15 15:33:55.005983] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:42.147 [2024-05-15 15:33:55.005992] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:42.147 [2024-05-15 15:33:55.006000] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:42.147 [2024-05-15 15:33:55.006849] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:42.147 [2024-05-15 15:33:55.007851] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:42.147 [2024-05-15 15:33:55.008859] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:42.147 [2024-05-15 15:33:55.009854] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.147 [2024-05-15 15:33:55.009953] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:42.147 [2024-05-15 15:33:55.010868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:42.147 [2024-05-15 15:33:55.010886] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:42.147 [2024-05-15 15:33:55.010895] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:42.147 [2024-05-15 15:33:55.010918] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:42.147 [2024-05-15 15:33:55.010931] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:42.147 [2024-05-15 15:33:55.010963] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.148 [2024-05-15 15:33:55.010973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.148 [2024-05-15 15:33:55.010995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011079] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:42.148 [2024-05-15 15:33:55.011088] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:42.148 [2024-05-15 15:33:55.011095] 
nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:42.148 [2024-05-15 15:33:55.011102] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:42.148 [2024-05-15 15:33:55.011116] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:42.148 [2024-05-15 15:33:55.011125] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:42.148 [2024-05-15 15:33:55.011133] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011145] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.148 [2024-05-15 15:33:55.011226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.148 [2024-05-15 15:33:55.011240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.148 [2024-05-15 15:33:55.011252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.148 [2024-05-15 15:33:55.011261] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011316] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:42.148 [2024-05-15 15:33:55.011325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 
00:15:42.148 [2024-05-15 15:33:55.011365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011436] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011452] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011465] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:42.148 [2024-05-15 15:33:55.011474] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:42.148 [2024-05-15 15:33:55.011484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011540] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:42.148 [2024-05-15 15:33:55.011572] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011598] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.148 [2024-05-15 15:33:55.011606] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.148 [2024-05-15 15:33:55.011615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011674] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011684] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.148 [2024-05-15 15:33:55.011692] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.148 [2024-05-15 15:33:55.011701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:15:42.148 [2024-05-15 15:33:55.011734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011759] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011769] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011780] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011789] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:42.148 [2024-05-15 15:33:55.011797] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:42.148 [2024-05-15 15:33:55.011805] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:42.148 [2024-05-15 15:33:55.011838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.011940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.011958] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:42.148 [2024-05-15 15:33:55.011966] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:42.148 [2024-05-15 15:33:55.011973] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:42.148 [2024-05-15 15:33:55.011979] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:42.148 [2024-05-15 15:33:55.011988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 
PRP2 0x2000002f7000 00:15:42.148 [2024-05-15 15:33:55.011998] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:42.148 [2024-05-15 15:33:55.012006] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:42.148 [2024-05-15 15:33:55.012014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.012024] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:42.148 [2024-05-15 15:33:55.012032] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.148 [2024-05-15 15:33:55.012040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.012052] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:42.148 [2024-05-15 15:33:55.012059] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:42.148 [2024-05-15 15:33:55.012068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:42.148 [2024-05-15 15:33:55.012078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.012101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.012116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:42.148 [2024-05-15 15:33:55.012130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:42.148 ===================================================== 00:15:42.148 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:42.149 ===================================================== 00:15:42.149 Controller Capabilities/Features 00:15:42.149 ================================ 00:15:42.149 Vendor ID: 4e58 00:15:42.149 Subsystem Vendor ID: 4e58 00:15:42.149 Serial Number: SPDK1 00:15:42.149 Model Number: SPDK bdev Controller 00:15:42.149 Firmware Version: 24.05 00:15:42.149 Recommended Arb Burst: 6 00:15:42.149 IEEE OUI Identifier: 8d 6b 50 00:15:42.149 Multi-path I/O 00:15:42.149 May have multiple subsystem ports: Yes 00:15:42.149 May have multiple controllers: Yes 00:15:42.149 Associated with SR-IOV VF: No 00:15:42.149 Max Data Transfer Size: 131072 00:15:42.149 Max Number of Namespaces: 32 00:15:42.149 Max Number of I/O Queues: 127 00:15:42.149 NVMe Specification Version (VS): 1.3 00:15:42.149 NVMe Specification Version (Identify): 1.3 00:15:42.149 Maximum Queue Entries: 256 00:15:42.149 Contiguous Queues Required: Yes 00:15:42.149 Arbitration Mechanisms Supported 00:15:42.149 Weighted Round Robin: Not Supported 00:15:42.149 Vendor Specific: Not Supported 00:15:42.149 Reset Timeout: 15000 ms 00:15:42.149 Doorbell Stride: 4 bytes 00:15:42.149 NVM Subsystem Reset: Not Supported 00:15:42.149 Command Sets Supported 00:15:42.149 NVM Command Set: Supported 00:15:42.149 Boot 
Partition: Not Supported 00:15:42.149 Memory Page Size Minimum: 4096 bytes 00:15:42.149 Memory Page Size Maximum: 4096 bytes 00:15:42.149 Persistent Memory Region: Not Supported 00:15:42.149 Optional Asynchronous Events Supported 00:15:42.149 Namespace Attribute Notices: Supported 00:15:42.149 Firmware Activation Notices: Not Supported 00:15:42.149 ANA Change Notices: Not Supported 00:15:42.149 PLE Aggregate Log Change Notices: Not Supported 00:15:42.149 LBA Status Info Alert Notices: Not Supported 00:15:42.149 EGE Aggregate Log Change Notices: Not Supported 00:15:42.149 Normal NVM Subsystem Shutdown event: Not Supported 00:15:42.149 Zone Descriptor Change Notices: Not Supported 00:15:42.149 Discovery Log Change Notices: Not Supported 00:15:42.149 Controller Attributes 00:15:42.149 128-bit Host Identifier: Supported 00:15:42.149 Non-Operational Permissive Mode: Not Supported 00:15:42.149 NVM Sets: Not Supported 00:15:42.149 Read Recovery Levels: Not Supported 00:15:42.149 Endurance Groups: Not Supported 00:15:42.149 Predictable Latency Mode: Not Supported 00:15:42.149 Traffic Based Keep ALive: Not Supported 00:15:42.149 Namespace Granularity: Not Supported 00:15:42.149 SQ Associations: Not Supported 00:15:42.149 UUID List: Not Supported 00:15:42.149 Multi-Domain Subsystem: Not Supported 00:15:42.149 Fixed Capacity Management: Not Supported 00:15:42.149 Variable Capacity Management: Not Supported 00:15:42.149 Delete Endurance Group: Not Supported 00:15:42.149 Delete NVM Set: Not Supported 00:15:42.149 Extended LBA Formats Supported: Not Supported 00:15:42.149 Flexible Data Placement Supported: Not Supported 00:15:42.149 00:15:42.149 Controller Memory Buffer Support 00:15:42.149 ================================ 00:15:42.149 Supported: No 00:15:42.149 00:15:42.149 Persistent Memory Region Support 00:15:42.149 ================================ 00:15:42.149 Supported: No 00:15:42.149 00:15:42.149 Admin Command Set Attributes 00:15:42.149 ============================ 00:15:42.149 Security Send/Receive: Not Supported 00:15:42.149 Format NVM: Not Supported 00:15:42.149 Firmware Activate/Download: Not Supported 00:15:42.149 Namespace Management: Not Supported 00:15:42.149 Device Self-Test: Not Supported 00:15:42.149 Directives: Not Supported 00:15:42.149 NVMe-MI: Not Supported 00:15:42.149 Virtualization Management: Not Supported 00:15:42.149 Doorbell Buffer Config: Not Supported 00:15:42.149 Get LBA Status Capability: Not Supported 00:15:42.149 Command & Feature Lockdown Capability: Not Supported 00:15:42.149 Abort Command Limit: 4 00:15:42.149 Async Event Request Limit: 4 00:15:42.149 Number of Firmware Slots: N/A 00:15:42.149 Firmware Slot 1 Read-Only: N/A 00:15:42.149 Firmware Activation Without Reset: N/A 00:15:42.149 Multiple Update Detection Support: N/A 00:15:42.149 Firmware Update Granularity: No Information Provided 00:15:42.149 Per-Namespace SMART Log: No 00:15:42.149 Asymmetric Namespace Access Log Page: Not Supported 00:15:42.149 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:42.149 Command Effects Log Page: Supported 00:15:42.149 Get Log Page Extended Data: Supported 00:15:42.149 Telemetry Log Pages: Not Supported 00:15:42.149 Persistent Event Log Pages: Not Supported 00:15:42.149 Supported Log Pages Log Page: May Support 00:15:42.149 Commands Supported & Effects Log Page: Not Supported 00:15:42.149 Feature Identifiers & Effects Log Page:May Support 00:15:42.149 NVMe-MI Commands & Effects Log Page: May Support 00:15:42.149 Data Area 4 for Telemetry Log: Not Supported 00:15:42.149 
Error Log Page Entries Supported: 128 00:15:42.149 Keep Alive: Supported 00:15:42.149 Keep Alive Granularity: 10000 ms 00:15:42.149 00:15:42.149 NVM Command Set Attributes 00:15:42.149 ========================== 00:15:42.149 Submission Queue Entry Size 00:15:42.149 Max: 64 00:15:42.149 Min: 64 00:15:42.149 Completion Queue Entry Size 00:15:42.149 Max: 16 00:15:42.149 Min: 16 00:15:42.149 Number of Namespaces: 32 00:15:42.149 Compare Command: Supported 00:15:42.149 Write Uncorrectable Command: Not Supported 00:15:42.149 Dataset Management Command: Supported 00:15:42.149 Write Zeroes Command: Supported 00:15:42.149 Set Features Save Field: Not Supported 00:15:42.149 Reservations: Not Supported 00:15:42.149 Timestamp: Not Supported 00:15:42.149 Copy: Supported 00:15:42.149 Volatile Write Cache: Present 00:15:42.149 Atomic Write Unit (Normal): 1 00:15:42.149 Atomic Write Unit (PFail): 1 00:15:42.149 Atomic Compare & Write Unit: 1 00:15:42.149 Fused Compare & Write: Supported 00:15:42.149 Scatter-Gather List 00:15:42.149 SGL Command Set: Supported (Dword aligned) 00:15:42.149 SGL Keyed: Not Supported 00:15:42.149 SGL Bit Bucket Descriptor: Not Supported 00:15:42.149 SGL Metadata Pointer: Not Supported 00:15:42.149 Oversized SGL: Not Supported 00:15:42.149 SGL Metadata Address: Not Supported 00:15:42.149 SGL Offset: Not Supported 00:15:42.149 Transport SGL Data Block: Not Supported 00:15:42.149 Replay Protected Memory Block: Not Supported 00:15:42.149 00:15:42.149 Firmware Slot Information 00:15:42.149 ========================= 00:15:42.149 Active slot: 1 00:15:42.149 Slot 1 Firmware Revision: 24.05 00:15:42.149 00:15:42.149 00:15:42.149 Commands Supported and Effects 00:15:42.149 ============================== 00:15:42.149 Admin Commands 00:15:42.149 -------------- 00:15:42.149 Get Log Page (02h): Supported 00:15:42.149 Identify (06h): Supported 00:15:42.149 Abort (08h): Supported 00:15:42.149 Set Features (09h): Supported 00:15:42.149 Get Features (0Ah): Supported 00:15:42.149 Asynchronous Event Request (0Ch): Supported 00:15:42.149 Keep Alive (18h): Supported 00:15:42.149 I/O Commands 00:15:42.149 ------------ 00:15:42.149 Flush (00h): Supported LBA-Change 00:15:42.149 Write (01h): Supported LBA-Change 00:15:42.149 Read (02h): Supported 00:15:42.149 Compare (05h): Supported 00:15:42.149 Write Zeroes (08h): Supported LBA-Change 00:15:42.149 Dataset Management (09h): Supported LBA-Change 00:15:42.149 Copy (19h): Supported LBA-Change 00:15:42.149 Unknown (79h): Supported LBA-Change 00:15:42.149 Unknown (7Ah): Supported 00:15:42.149 00:15:42.149 Error Log 00:15:42.149 ========= 00:15:42.149 00:15:42.149 Arbitration 00:15:42.149 =========== 00:15:42.149 Arbitration Burst: 1 00:15:42.149 00:15:42.149 Power Management 00:15:42.149 ================ 00:15:42.149 Number of Power States: 1 00:15:42.149 Current Power State: Power State #0 00:15:42.149 Power State #0: 00:15:42.149 Max Power: 0.00 W 00:15:42.149 Non-Operational State: Operational 00:15:42.149 Entry Latency: Not Reported 00:15:42.149 Exit Latency: Not Reported 00:15:42.149 Relative Read Throughput: 0 00:15:42.149 Relative Read Latency: 0 00:15:42.149 Relative Write Throughput: 0 00:15:42.149 Relative Write Latency: 0 00:15:42.149 Idle Power: Not Reported 00:15:42.149 Active Power: Not Reported 00:15:42.149 Non-Operational Permissive Mode: Not Supported 00:15:42.149 00:15:42.149 Health Information 00:15:42.149 ================== 00:15:42.149 Critical Warnings: 00:15:42.149 Available Spare Space: OK 00:15:42.149 Temperature: OK 00:15:42.149 
Device Reliability: OK 00:15:42.149 Read Only: No 00:15:42.149 Volatile Memory Backup: OK 00:15:42.149 Current Temperature: 0 Kelvin (-2[2024-05-15 15:33:55.012285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:42.149 [2024-05-15 15:33:55.012303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:42.149 [2024-05-15 15:33:55.012346] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:42.149 [2024-05-15 15:33:55.012364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.150 [2024-05-15 15:33:55.012375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.150 [2024-05-15 15:33:55.012386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.150 [2024-05-15 15:33:55.012396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.150 [2024-05-15 15:33:55.012891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:42.150 [2024-05-15 15:33:55.012911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:42.150 [2024-05-15 15:33:55.013887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.150 [2024-05-15 15:33:55.013972] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:42.150 [2024-05-15 15:33:55.013987] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:42.150 [2024-05-15 15:33:55.014899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:42.150 [2024-05-15 15:33:55.014921] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:42.150 [2024-05-15 15:33:55.014976] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:42.150 [2024-05-15 15:33:55.020226] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.150 73 Celsius) 00:15:42.150 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:42.150 Available Spare: 0% 00:15:42.150 Available Spare Threshold: 0% 00:15:42.150 Life Percentage Used: 0% 00:15:42.150 Data Units Read: 0 00:15:42.150 Data Units Written: 0 00:15:42.150 Host Read Commands: 0 00:15:42.150 Host Write Commands: 0 00:15:42.150 Controller Busy Time: 0 minutes 00:15:42.150 Power Cycles: 0 00:15:42.150 Power On Hours: 0 hours 00:15:42.150 Unsafe Shutdowns: 0 00:15:42.150 Unrecoverable Media Errors: 0 00:15:42.150 Lifetime Error Log Entries: 0 00:15:42.150 Warning Temperature Time: 0 minutes 00:15:42.150 Critical Temperature Time: 0 minutes 00:15:42.150 00:15:42.150 Number of Queues 00:15:42.150 ================ 00:15:42.150 
Number of I/O Submission Queues: 127 00:15:42.150 Number of I/O Completion Queues: 127 00:15:42.150 00:15:42.150 Active Namespaces 00:15:42.150 ================= 00:15:42.150 Namespace ID:1 00:15:42.150 Error Recovery Timeout: Unlimited 00:15:42.150 Command Set Identifier: NVM (00h) 00:15:42.150 Deallocate: Supported 00:15:42.150 Deallocated/Unwritten Error: Not Supported 00:15:42.150 Deallocated Read Value: Unknown 00:15:42.150 Deallocate in Write Zeroes: Not Supported 00:15:42.150 Deallocated Guard Field: 0xFFFF 00:15:42.150 Flush: Supported 00:15:42.150 Reservation: Supported 00:15:42.150 Namespace Sharing Capabilities: Multiple Controllers 00:15:42.150 Size (in LBAs): 131072 (0GiB) 00:15:42.150 Capacity (in LBAs): 131072 (0GiB) 00:15:42.150 Utilization (in LBAs): 131072 (0GiB) 00:15:42.150 NGUID: 288D5F0AFB294374AD5988C6B7840E3E 00:15:42.150 UUID: 288d5f0a-fb29-4374-ad59-88c6b7840e3e 00:15:42.150 Thin Provisioning: Not Supported 00:15:42.150 Per-NS Atomic Units: Yes 00:15:42.150 Atomic Boundary Size (Normal): 0 00:15:42.150 Atomic Boundary Size (PFail): 0 00:15:42.150 Atomic Boundary Offset: 0 00:15:42.150 Maximum Single Source Range Length: 65535 00:15:42.150 Maximum Copy Length: 65535 00:15:42.150 Maximum Source Range Count: 1 00:15:42.150 NGUID/EUI64 Never Reused: No 00:15:42.150 Namespace Write Protected: No 00:15:42.150 Number of LBA Formats: 1 00:15:42.150 Current LBA Format: LBA Format #00 00:15:42.150 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:42.150 00:15:42.150 15:33:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:42.150 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.408 [2024-05-15 15:33:55.251032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.666 Initializing NVMe Controllers 00:15:47.666 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:47.666 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:47.666 Initialization complete. Launching workers. 
00:15:47.666 ======================================================== 00:15:47.666 Latency(us) 00:15:47.666 Device Information : IOPS MiB/s Average min max 00:15:47.666 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34058.40 133.04 3758.49 1174.38 9708.04 00:15:47.666 ======================================================== 00:15:47.666 Total : 34058.40 133.04 3758.49 1174.38 9708.04 00:15:47.666 00:15:47.666 [2024-05-15 15:34:00.273912] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.666 15:34:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:47.666 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.666 [2024-05-15 15:34:00.509080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:53.066 Initializing NVMe Controllers 00:15:53.066 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:53.066 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:53.066 Initialization complete. Launching workers. 00:15:53.066 ======================================================== 00:15:53.066 Latency(us) 00:15:53.066 Device Information : IOPS MiB/s Average min max 00:15:53.066 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16048.00 62.69 7985.82 6637.13 14371.56 00:15:53.066 ======================================================== 00:15:53.066 Total : 16048.00 62.69 7985.82 6637.13 14371.56 00:15:53.066 00:15:53.066 [2024-05-15 15:34:05.542987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:53.066 15:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:53.066 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.066 [2024-05-15 15:34:05.777081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.329 [2024-05-15 15:34:10.852550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.329 Initializing NVMe Controllers 00:15:58.329 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.329 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:58.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:58.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:58.329 Initialization complete. Launching workers. 
00:15:58.329 Starting thread on core 2 00:15:58.329 Starting thread on core 3 00:15:58.329 Starting thread on core 1 00:15:58.329 15:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:58.329 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.329 [2024-05-15 15:34:11.150857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.607 [2024-05-15 15:34:14.571494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.607 Initializing NVMe Controllers 00:16:01.607 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.607 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.607 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:01.607 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:01.607 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:01.607 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:01.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:01.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:01.607 Initialization complete. Launching workers. 00:16:01.607 Starting thread on core 1 with urgent priority queue 00:16:01.607 Starting thread on core 2 with urgent priority queue 00:16:01.607 Starting thread on core 3 with urgent priority queue 00:16:01.607 Starting thread on core 0 with urgent priority queue 00:16:01.607 SPDK bdev Controller (SPDK1 ) core 0: 3991.33 IO/s 25.05 secs/100000 ios 00:16:01.607 SPDK bdev Controller (SPDK1 ) core 1: 3696.67 IO/s 27.05 secs/100000 ios 00:16:01.607 SPDK bdev Controller (SPDK1 ) core 2: 3451.00 IO/s 28.98 secs/100000 ios 00:16:01.607 SPDK bdev Controller (SPDK1 ) core 3: 3969.00 IO/s 25.20 secs/100000 ios 00:16:01.607 ======================================================== 00:16:01.607 00:16:01.607 15:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:01.607 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.864 [2024-05-15 15:34:14.887710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.864 Initializing NVMe Controllers 00:16:01.864 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.864 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.864 Namespace ID: 1 size: 0GB 00:16:01.864 Initialization complete. 00:16:01.864 INFO: using host memory buffer for IO 00:16:01.864 Hello world! 
00:16:01.864 [2024-05-15 15:34:14.925339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:02.121 15:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:02.121 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.379 [2024-05-15 15:34:15.229744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.310 Initializing NVMe Controllers 00:16:03.310 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.310 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.310 Initialization complete. Launching workers. 00:16:03.311 submit (in ns) avg, min, max = 7316.7, 3545.6, 4014105.6 00:16:03.311 complete (in ns) avg, min, max = 27471.1, 2072.2, 4015608.9 00:16:03.311 00:16:03.311 Submit histogram 00:16:03.311 ================ 00:16:03.311 Range in us Cumulative Count 00:16:03.311 3.532 - 3.556: 0.0456% ( 6) 00:16:03.311 3.556 - 3.579: 1.5435% ( 197) 00:16:03.311 3.579 - 3.603: 6.7898% ( 690) 00:16:03.311 3.603 - 3.627: 14.1195% ( 964) 00:16:03.311 3.627 - 3.650: 24.3689% ( 1348) 00:16:03.311 3.650 - 3.674: 34.0252% ( 1270) 00:16:03.311 3.674 - 3.698: 42.4118% ( 1103) 00:16:03.311 3.698 - 3.721: 48.5782% ( 811) 00:16:03.311 3.721 - 3.745: 52.9425% ( 574) 00:16:03.311 3.745 - 3.769: 56.8887% ( 519) 00:16:03.311 3.769 - 3.793: 60.6448% ( 494) 00:16:03.311 3.793 - 3.816: 63.5873% ( 387) 00:16:03.311 3.816 - 3.840: 66.8644% ( 431) 00:16:03.311 3.840 - 3.864: 70.9094% ( 532) 00:16:03.311 3.864 - 3.887: 75.5246% ( 607) 00:16:03.311 3.887 - 3.911: 80.1779% ( 612) 00:16:03.311 3.911 - 3.935: 83.6071% ( 451) 00:16:03.311 3.935 - 3.959: 85.6372% ( 267) 00:16:03.311 3.959 - 3.982: 87.4924% ( 244) 00:16:03.311 3.982 - 4.006: 89.1119% ( 213) 00:16:03.311 4.006 - 4.030: 90.2600% ( 151) 00:16:03.311 4.030 - 4.053: 91.0508% ( 104) 00:16:03.311 4.053 - 4.077: 91.9632% ( 120) 00:16:03.311 4.077 - 4.101: 92.8604% ( 118) 00:16:03.311 4.101 - 4.124: 93.8641% ( 132) 00:16:03.311 4.124 - 4.148: 94.6928% ( 109) 00:16:03.311 4.148 - 4.172: 95.2175% ( 69) 00:16:03.311 4.172 - 4.196: 95.5976% ( 50) 00:16:03.311 4.196 - 4.219: 95.9474% ( 46) 00:16:03.311 4.219 - 4.243: 96.1907% ( 32) 00:16:03.311 4.243 - 4.267: 96.4948% ( 40) 00:16:03.311 4.267 - 4.290: 96.6317% ( 18) 00:16:03.311 4.290 - 4.314: 96.7609% ( 17) 00:16:03.311 4.314 - 4.338: 96.8750% ( 15) 00:16:03.311 4.338 - 4.361: 96.9967% ( 16) 00:16:03.311 4.361 - 4.385: 97.0955% ( 13) 00:16:03.311 4.385 - 4.409: 97.1411% ( 6) 00:16:03.311 4.409 - 4.433: 97.1715% ( 4) 00:16:03.311 4.433 - 4.456: 97.2476% ( 10) 00:16:03.311 4.456 - 4.480: 97.2704% ( 3) 00:16:03.311 4.480 - 4.504: 97.3084% ( 5) 00:16:03.311 4.504 - 4.527: 97.3236% ( 2) 00:16:03.311 4.527 - 4.551: 97.3388% ( 2) 00:16:03.311 4.551 - 4.575: 97.3692% ( 4) 00:16:03.311 4.575 - 4.599: 97.3768% ( 1) 00:16:03.311 4.599 - 4.622: 97.3920% ( 2) 00:16:03.311 4.646 - 4.670: 97.4148% ( 3) 00:16:03.311 4.670 - 4.693: 97.4224% ( 1) 00:16:03.311 4.717 - 4.741: 97.4300% ( 1) 00:16:03.311 4.741 - 4.764: 97.4453% ( 2) 00:16:03.311 4.788 - 4.812: 97.4757% ( 4) 00:16:03.311 4.812 - 4.836: 97.5289% ( 7) 00:16:03.311 4.836 - 4.859: 97.5669% ( 5) 00:16:03.311 4.859 - 4.883: 97.5897% ( 3) 00:16:03.311 4.883 - 4.907: 97.6201% ( 4) 00:16:03.311 4.907 - 4.930: 97.6658% ( 6) 00:16:03.311 
4.930 - 4.954: 97.7266% ( 8) 00:16:03.311 4.954 - 4.978: 97.7798% ( 7) 00:16:03.311 4.978 - 5.001: 97.8330% ( 7) 00:16:03.311 5.001 - 5.025: 97.8863% ( 7) 00:16:03.311 5.025 - 5.049: 97.9243% ( 5) 00:16:03.311 5.049 - 5.073: 97.9623% ( 5) 00:16:03.311 5.073 - 5.096: 97.9775% ( 2) 00:16:03.311 5.096 - 5.120: 98.0003% ( 3) 00:16:03.311 5.120 - 5.144: 98.0231% ( 3) 00:16:03.311 5.144 - 5.167: 98.0535% ( 4) 00:16:03.311 5.167 - 5.191: 98.0763% ( 3) 00:16:03.311 5.191 - 5.215: 98.0915% ( 2) 00:16:03.311 5.239 - 5.262: 98.1068% ( 2) 00:16:03.311 5.262 - 5.286: 98.1144% ( 1) 00:16:03.311 5.286 - 5.310: 98.1220% ( 1) 00:16:03.311 5.381 - 5.404: 98.1372% ( 2) 00:16:03.311 5.570 - 5.594: 98.1524% ( 2) 00:16:03.311 5.594 - 5.618: 98.1600% ( 1) 00:16:03.311 5.618 - 5.641: 98.1676% ( 1) 00:16:03.311 5.807 - 5.831: 98.1752% ( 1) 00:16:03.311 5.902 - 5.926: 98.1828% ( 1) 00:16:03.311 5.926 - 5.950: 98.2056% ( 3) 00:16:03.311 5.973 - 5.997: 98.2284% ( 3) 00:16:03.311 6.021 - 6.044: 98.2436% ( 2) 00:16:03.311 6.044 - 6.068: 98.2512% ( 1) 00:16:03.311 6.068 - 6.116: 98.2588% ( 1) 00:16:03.311 6.116 - 6.163: 98.2664% ( 1) 00:16:03.311 6.163 - 6.210: 98.2740% ( 1) 00:16:03.311 6.210 - 6.258: 98.2968% ( 3) 00:16:03.311 6.400 - 6.447: 98.3044% ( 1) 00:16:03.311 6.684 - 6.732: 98.3120% ( 1) 00:16:03.311 6.779 - 6.827: 98.3196% ( 1) 00:16:03.311 6.921 - 6.969: 98.3273% ( 1) 00:16:03.311 7.159 - 7.206: 98.3425% ( 2) 00:16:03.311 7.253 - 7.301: 98.3501% ( 1) 00:16:03.311 7.301 - 7.348: 98.3653% ( 2) 00:16:03.311 7.348 - 7.396: 98.3729% ( 1) 00:16:03.311 7.396 - 7.443: 98.3881% ( 2) 00:16:03.311 7.443 - 7.490: 98.3957% ( 1) 00:16:03.311 7.490 - 7.538: 98.4033% ( 1) 00:16:03.311 7.538 - 7.585: 98.4109% ( 1) 00:16:03.311 7.585 - 7.633: 98.4185% ( 1) 00:16:03.311 7.633 - 7.680: 98.4261% ( 1) 00:16:03.311 7.680 - 7.727: 98.4337% ( 1) 00:16:03.311 7.727 - 7.775: 98.4413% ( 1) 00:16:03.311 7.775 - 7.822: 98.4489% ( 1) 00:16:03.311 7.822 - 7.870: 98.4565% ( 1) 00:16:03.311 7.870 - 7.917: 98.4717% ( 2) 00:16:03.311 7.964 - 8.012: 98.4793% ( 1) 00:16:03.311 8.012 - 8.059: 98.4869% ( 1) 00:16:03.311 8.154 - 8.201: 98.4945% ( 1) 00:16:03.311 8.201 - 8.249: 98.5021% ( 1) 00:16:03.311 8.249 - 8.296: 98.5097% ( 1) 00:16:03.311 8.344 - 8.391: 98.5249% ( 2) 00:16:03.311 8.391 - 8.439: 98.5325% ( 1) 00:16:03.311 8.439 - 8.486: 98.5554% ( 3) 00:16:03.311 8.533 - 8.581: 98.5706% ( 2) 00:16:03.311 8.628 - 8.676: 98.5782% ( 1) 00:16:03.311 8.818 - 8.865: 98.5858% ( 1) 00:16:03.311 8.865 - 8.913: 98.5934% ( 1) 00:16:03.311 9.007 - 9.055: 98.6010% ( 1) 00:16:03.311 9.055 - 9.102: 98.6086% ( 1) 00:16:03.311 9.102 - 9.150: 98.6238% ( 2) 00:16:03.311 9.150 - 9.197: 98.6314% ( 1) 00:16:03.311 9.244 - 9.292: 98.6466% ( 2) 00:16:03.311 9.387 - 9.434: 98.6618% ( 2) 00:16:03.311 10.145 - 10.193: 98.6694% ( 1) 00:16:03.311 10.193 - 10.240: 98.6770% ( 1) 00:16:03.311 10.287 - 10.335: 98.6846% ( 1) 00:16:03.311 10.335 - 10.382: 98.6922% ( 1) 00:16:03.311 10.572 - 10.619: 98.6998% ( 1) 00:16:03.311 10.809 - 10.856: 98.7074% ( 1) 00:16:03.311 10.856 - 10.904: 98.7150% ( 1) 00:16:03.311 11.046 - 11.093: 98.7226% ( 1) 00:16:03.311 11.188 - 11.236: 98.7302% ( 1) 00:16:03.311 11.283 - 11.330: 98.7378% ( 1) 00:16:03.311 11.473 - 11.520: 98.7454% ( 1) 00:16:03.311 11.615 - 11.662: 98.7530% ( 1) 00:16:03.311 11.662 - 11.710: 98.7606% ( 1) 00:16:03.311 11.947 - 11.994: 98.7682% ( 1) 00:16:03.311 12.231 - 12.326: 98.7759% ( 1) 00:16:03.311 12.421 - 12.516: 98.7835% ( 1) 00:16:03.311 12.516 - 12.610: 98.7911% ( 1) 00:16:03.311 12.800 - 12.895: 98.8063% ( 2) 
00:16:03.311 12.895 - 12.990: 98.8139% ( 1) 00:16:03.311 12.990 - 13.084: 98.8215% ( 1) 00:16:03.311 13.179 - 13.274: 98.8291% ( 1) 00:16:03.311 13.559 - 13.653: 98.8519% ( 3) 00:16:03.311 13.653 - 13.748: 98.8595% ( 1) 00:16:03.311 13.748 - 13.843: 98.8671% ( 1) 00:16:03.311 14.033 - 14.127: 98.8823% ( 2) 00:16:03.311 14.317 - 14.412: 98.8899% ( 1) 00:16:03.311 15.834 - 15.929: 98.8975% ( 1) 00:16:03.311 17.067 - 17.161: 98.9051% ( 1) 00:16:03.311 17.161 - 17.256: 98.9203% ( 2) 00:16:03.311 17.256 - 17.351: 98.9279% ( 1) 00:16:03.311 17.351 - 17.446: 98.9507% ( 3) 00:16:03.311 17.446 - 17.541: 98.9659% ( 2) 00:16:03.311 17.541 - 17.636: 98.9964% ( 4) 00:16:03.311 17.636 - 17.730: 99.0344% ( 5) 00:16:03.311 17.730 - 17.825: 99.1104% ( 10) 00:16:03.311 17.825 - 17.920: 99.1484% ( 5) 00:16:03.311 17.920 - 18.015: 99.2168% ( 9) 00:16:03.311 18.015 - 18.110: 99.2853% ( 9) 00:16:03.311 18.110 - 18.204: 99.3461% ( 8) 00:16:03.311 18.204 - 18.299: 99.4145% ( 9) 00:16:03.311 18.299 - 18.394: 99.4754% ( 8) 00:16:03.311 18.394 - 18.489: 99.5666% ( 12) 00:16:03.311 18.489 - 18.584: 99.6426% ( 10) 00:16:03.311 18.584 - 18.679: 99.6883% ( 6) 00:16:03.311 18.679 - 18.773: 99.7263% ( 5) 00:16:03.311 18.773 - 18.868: 99.7643% ( 5) 00:16:03.311 18.868 - 18.963: 99.7871% ( 3) 00:16:03.311 18.963 - 19.058: 99.8023% ( 2) 00:16:03.311 19.058 - 19.153: 99.8099% ( 1) 00:16:03.311 19.153 - 19.247: 99.8251% ( 2) 00:16:03.311 19.532 - 19.627: 99.8327% ( 1) 00:16:03.311 19.627 - 19.721: 99.8403% ( 1) 00:16:03.311 20.006 - 20.101: 99.8479% ( 1) 00:16:03.311 20.196 - 20.290: 99.8555% ( 1) 00:16:03.311 20.575 - 20.670: 99.8631% ( 1) 00:16:03.311 20.954 - 21.049: 99.8707% ( 1) 00:16:03.311 23.040 - 23.135: 99.8783% ( 1) 00:16:03.311 23.704 - 23.799: 99.8859% ( 1) 00:16:03.311 23.988 - 24.083: 99.8936% ( 1) 00:16:03.311 24.273 - 24.462: 99.9012% ( 1) 00:16:03.311 26.738 - 26.927: 99.9088% ( 1) 00:16:03.311 27.876 - 28.065: 99.9164% ( 1) 00:16:03.311 3980.705 - 4004.978: 99.9924% ( 10) 00:16:03.311 4004.978 - 4029.250: 100.0000% ( 1) 00:16:03.311 00:16:03.311 Complete histogram 00:16:03.311 ================== 00:16:03.311 Range in us Cumulative Count 00:16:03.311 2.062 - 2.074: 0.0380% ( 5) 00:16:03.312 2.074 - 2.086: 9.2077% ( 1206) 00:16:03.312 2.086 - 2.098: 23.8519% ( 1926) 00:16:03.312 2.098 - 2.110: 26.3686% ( 331) 00:16:03.312 2.110 - 2.121: 47.5441% ( 2785) 00:16:03.312 2.121 - 2.133: 56.7290% ( 1208) 00:16:03.312 2.133 - 2.145: 58.4778% ( 230) 00:16:03.312 2.145 - 2.157: 64.3172% ( 768) 00:16:03.312 2.157 - 2.169: 68.1417% ( 503) 00:16:03.312 2.169 - 2.181: 69.9437% ( 237) 00:16:03.312 2.181 - 2.193: 76.4523% ( 856) 00:16:03.312 2.193 - 2.204: 79.3263% ( 378) 00:16:03.312 2.204 - 2.216: 80.1627% ( 110) 00:16:03.312 2.216 - 2.228: 84.1317% ( 522) 00:16:03.312 2.228 - 2.240: 86.5572% ( 319) 00:16:03.312 2.240 - 2.252: 87.2795% ( 95) 00:16:03.312 2.252 - 2.264: 91.1268% ( 506) 00:16:03.312 2.264 - 2.276: 92.9440% ( 239) 00:16:03.312 2.276 - 2.287: 93.4155% ( 62) 00:16:03.312 2.287 - 2.299: 94.4951% ( 142) 00:16:03.312 2.299 - 2.311: 95.1110% ( 81) 00:16:03.312 2.311 - 2.323: 95.3011% ( 25) 00:16:03.312 2.323 - 2.335: 95.4760% ( 23) 00:16:03.312 2.335 - 2.347: 95.6204% ( 19) 00:16:03.312 2.347 - 2.359: 95.7649% ( 19) 00:16:03.312 2.359 - 2.370: 96.0310% ( 35) 00:16:03.312 2.370 - 2.382: 96.3808% ( 46) 00:16:03.312 2.382 - 2.394: 96.5861% ( 27) 00:16:03.312 2.394 - 2.406: 96.8066% ( 29) 00:16:03.312 2.406 - 2.418: 96.9358% ( 17) 00:16:03.312 2.418 - 2.430: 97.0879% ( 20) 00:16:03.312 2.430 - 2.441: 97.3084% ( 29) 
00:16:03.312 2.441 - 2.453: 97.4985% ( 25) 00:16:03.312 2.453 - 2.465: 97.6734% ( 23) 00:16:03.312 2.465 - 2.477: 97.8634% ( 25) 00:16:03.312 2.477 - 2.489: 98.0231% ( 21) 00:16:03.312 2.489 - 2.501: 98.1144% ( 12) 00:16:03.312 2.501 - 2.513: 98.2284% ( 15) 00:16:03.312 2.513 - 2.524: 98.3044% ( 10) 00:16:03.312 2.524 - 2.536: 98.3881% ( 11) 00:16:03.312 2.536 - 2.548: 98.4109% ( 3) 00:16:03.312 2.548 - 2.560: 98.4717% ( 8) 00:16:03.312 2.560 - 2.572: 98.4793% ( 1) 00:16:03.312 2.572 - 2.584: 98.4945% ( 2) 00:16:03.312 2.584 - 2.596: 98.5021% ( 1) 00:16:03.312 2.619 - 2.631: 98.5097% ( 1) 00:16:03.312 2.655 - 2.667: 98.5173% ( 1) 00:16:03.312 2.679 - 2.690: 98.5249% ( 1) 00:16:03.312 2.690 - 2.702: 98.5325% ( 1) 00:16:03.312 2.714 - 2.726: 98.5401% ( 1) 00:16:03.312 2.726 - 2.738: 98.5477% ( 1) 00:16:03.312 2.939 - 2.951: 98.5554% ( 1) 00:16:03.312 3.342 - 3.366: 98.5630% ( 1) 00:16:03.312 3.366 - 3.390: 98.5782% ( 2) 00:16:03.312 3.413 - 3.437: 98.5858% ( 1) 00:16:03.312 3.437 - 3.461: 98.6010% ( 2) 00:16:03.312 3.484 - 3.508: 98.6086% ( 1) 00:16:03.312 3.508 - 3.532: 98.6238% ( 2) 00:16:03.312 3.556 - 3.579: 98.6314% ( 1) 00:16:03.312 3.603 - 3.627: 98.6466% ( 2) 00:16:03.312 3.650 - 3.674: 98.6542% ( 1) 00:16:03.312 3.674 - 3.698: 98.6694% ( 2) 00:16:03.312 3.721 - 3.745: 98.6846% ( 2) 00:16:03.312 3.745 - 3.769: 98.6922% ( 1) 00:16:03.312 3.769 - 3.793: 98.6998% ( 1) 00:16:03.312 3.816 - 3.840: 9[2024-05-15 15:34:16.251983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.312 8.7074% ( 1) 00:16:03.312 3.840 - 3.864: 98.7150% ( 1) 00:16:03.312 3.864 - 3.887: 98.7226% ( 1) 00:16:03.312 3.935 - 3.959: 98.7302% ( 1) 00:16:03.312 3.982 - 4.006: 98.7378% ( 1) 00:16:03.312 4.006 - 4.030: 98.7454% ( 1) 00:16:03.312 4.361 - 4.385: 98.7530% ( 1) 00:16:03.312 5.594 - 5.618: 98.7606% ( 1) 00:16:03.312 6.116 - 6.163: 98.7682% ( 1) 00:16:03.312 6.305 - 6.353: 98.7835% ( 2) 00:16:03.312 6.400 - 6.447: 98.7911% ( 1) 00:16:03.312 6.495 - 6.542: 98.7987% ( 1) 00:16:03.312 6.542 - 6.590: 98.8139% ( 2) 00:16:03.312 6.732 - 6.779: 98.8215% ( 1) 00:16:03.312 6.779 - 6.827: 98.8291% ( 1) 00:16:03.312 7.111 - 7.159: 98.8367% ( 1) 00:16:03.312 7.159 - 7.206: 98.8519% ( 2) 00:16:03.312 7.680 - 7.727: 98.8595% ( 1) 00:16:03.312 7.727 - 7.775: 98.8671% ( 1) 00:16:03.312 7.870 - 7.917: 98.8823% ( 2) 00:16:03.312 7.917 - 7.964: 98.8899% ( 1) 00:16:03.312 8.107 - 8.154: 98.8975% ( 1) 00:16:03.312 8.439 - 8.486: 98.9051% ( 1) 00:16:03.312 8.628 - 8.676: 98.9127% ( 1) 00:16:03.312 10.714 - 10.761: 98.9203% ( 1) 00:16:03.312 15.739 - 15.834: 98.9279% ( 1) 00:16:03.312 15.834 - 15.929: 98.9659% ( 5) 00:16:03.312 15.929 - 16.024: 98.9735% ( 1) 00:16:03.312 16.024 - 16.119: 98.9887% ( 2) 00:16:03.312 16.119 - 16.213: 98.9964% ( 1) 00:16:03.312 16.213 - 16.308: 99.0040% ( 1) 00:16:03.312 16.308 - 16.403: 99.0420% ( 5) 00:16:03.312 16.403 - 16.498: 99.0800% ( 5) 00:16:03.312 16.498 - 16.593: 99.1484% ( 9) 00:16:03.312 16.593 - 16.687: 99.1940% ( 6) 00:16:03.312 16.687 - 16.782: 99.2245% ( 4) 00:16:03.312 16.782 - 16.877: 99.2701% ( 6) 00:16:03.312 16.877 - 16.972: 99.2853% ( 2) 00:16:03.312 16.972 - 17.067: 99.2929% ( 1) 00:16:03.312 17.067 - 17.161: 99.3005% ( 1) 00:16:03.312 17.161 - 17.256: 99.3081% ( 1) 00:16:03.312 17.256 - 17.351: 99.3157% ( 1) 00:16:03.312 17.446 - 17.541: 99.3233% ( 1) 00:16:03.312 17.920 - 18.015: 99.3309% ( 1) 00:16:03.312 18.015 - 18.110: 99.3385% ( 1) 00:16:03.312 18.110 - 18.204: 99.3461% ( 1) 00:16:03.312 18.394 - 18.489: 99.3537% 
( 1) 00:16:03.312 18.584 - 18.679: 99.3613% ( 1) 00:16:03.312 21.333 - 21.428: 99.3689% ( 1) 00:16:03.312 3980.705 - 4004.978: 99.9088% ( 71) 00:16:03.312 4004.978 - 4029.250: 100.0000% ( 12) 00:16:03.312 00:16:03.312 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:03.312 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:03.312 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:03.312 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:03.312 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:03.569 [ 00:16:03.569 { 00:16:03.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:03.569 "subtype": "Discovery", 00:16:03.569 "listen_addresses": [], 00:16:03.569 "allow_any_host": true, 00:16:03.569 "hosts": [] 00:16:03.569 }, 00:16:03.569 { 00:16:03.569 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:03.569 "subtype": "NVMe", 00:16:03.569 "listen_addresses": [ 00:16:03.569 { 00:16:03.569 "trtype": "VFIOUSER", 00:16:03.569 "adrfam": "IPv4", 00:16:03.569 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:03.569 "trsvcid": "0" 00:16:03.569 } 00:16:03.569 ], 00:16:03.569 "allow_any_host": true, 00:16:03.569 "hosts": [], 00:16:03.569 "serial_number": "SPDK1", 00:16:03.569 "model_number": "SPDK bdev Controller", 00:16:03.569 "max_namespaces": 32, 00:16:03.569 "min_cntlid": 1, 00:16:03.569 "max_cntlid": 65519, 00:16:03.569 "namespaces": [ 00:16:03.569 { 00:16:03.569 "nsid": 1, 00:16:03.569 "bdev_name": "Malloc1", 00:16:03.569 "name": "Malloc1", 00:16:03.569 "nguid": "288D5F0AFB294374AD5988C6B7840E3E", 00:16:03.569 "uuid": "288d5f0a-fb29-4374-ad59-88c6b7840e3e" 00:16:03.569 } 00:16:03.569 ] 00:16:03.569 }, 00:16:03.569 { 00:16:03.569 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:03.569 "subtype": "NVMe", 00:16:03.569 "listen_addresses": [ 00:16:03.569 { 00:16:03.569 "trtype": "VFIOUSER", 00:16:03.569 "adrfam": "IPv4", 00:16:03.569 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:03.569 "trsvcid": "0" 00:16:03.569 } 00:16:03.569 ], 00:16:03.569 "allow_any_host": true, 00:16:03.569 "hosts": [], 00:16:03.569 "serial_number": "SPDK2", 00:16:03.569 "model_number": "SPDK bdev Controller", 00:16:03.569 "max_namespaces": 32, 00:16:03.569 "min_cntlid": 1, 00:16:03.569 "max_cntlid": 65519, 00:16:03.569 "namespaces": [ 00:16:03.569 { 00:16:03.569 "nsid": 1, 00:16:03.569 "bdev_name": "Malloc2", 00:16:03.569 "name": "Malloc2", 00:16:03.569 "nguid": "572A879ECF39486DBEB292DB3234BF60", 00:16:03.569 "uuid": "572a879e-cf39-486d-beb2-92db3234bf60" 00:16:03.569 } 00:16:03.569 ] 00:16:03.569 } 00:16:03.569 ] 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1276138 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 
00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:03.569 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:03.569 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.827 [2024-05-15 15:34:16.727693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.827 Malloc3 00:16:03.827 15:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:04.083 [2024-05-15 15:34:17.073352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.084 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.084 Asynchronous Event Request test 00:16:04.084 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.084 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.084 Registering asynchronous event callbacks... 00:16:04.084 Starting namespace attribute notice tests for all controllers... 00:16:04.084 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.084 aer_cb - Changed Namespace 00:16:04.084 Cleaning up... 
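The namespace-attribute notice reported above is driven by the two RPCs the harness issues while the aer tool is waiting: aer registers its async event callbacks and touches /tmp/aer_touch_file, then the second namespace is hot-added and the target raises an AEN for log page 4 (changed namespace list). A minimal sketch of the same flow, assuming the target from this job and its default RPC socket; only the backgrounding and wait loop are added here for illustration, the commands themselves are the ones shown in this log:

  # start the AER listener; it creates /tmp/aer_touch_file once its async event callbacks are armed
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -n 2 -g -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # hot-add namespace 2; the controller reports Namespace Attribute Changed and aer_cb fires
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  wait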
00:16:04.342 [ 00:16:04.342 { 00:16:04.342 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.342 "subtype": "Discovery", 00:16:04.342 "listen_addresses": [], 00:16:04.342 "allow_any_host": true, 00:16:04.342 "hosts": [] 00:16:04.342 }, 00:16:04.342 { 00:16:04.342 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.342 "subtype": "NVMe", 00:16:04.342 "listen_addresses": [ 00:16:04.342 { 00:16:04.342 "trtype": "VFIOUSER", 00:16:04.342 "adrfam": "IPv4", 00:16:04.342 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.342 "trsvcid": "0" 00:16:04.342 } 00:16:04.342 ], 00:16:04.342 "allow_any_host": true, 00:16:04.342 "hosts": [], 00:16:04.342 "serial_number": "SPDK1", 00:16:04.342 "model_number": "SPDK bdev Controller", 00:16:04.342 "max_namespaces": 32, 00:16:04.342 "min_cntlid": 1, 00:16:04.342 "max_cntlid": 65519, 00:16:04.342 "namespaces": [ 00:16:04.342 { 00:16:04.342 "nsid": 1, 00:16:04.342 "bdev_name": "Malloc1", 00:16:04.342 "name": "Malloc1", 00:16:04.342 "nguid": "288D5F0AFB294374AD5988C6B7840E3E", 00:16:04.342 "uuid": "288d5f0a-fb29-4374-ad59-88c6b7840e3e" 00:16:04.342 }, 00:16:04.342 { 00:16:04.342 "nsid": 2, 00:16:04.342 "bdev_name": "Malloc3", 00:16:04.342 "name": "Malloc3", 00:16:04.342 "nguid": "FD442CA9537C4FA8BDB01C3BE193A3F5", 00:16:04.342 "uuid": "fd442ca9-537c-4fa8-bdb0-1c3be193a3f5" 00:16:04.342 } 00:16:04.342 ] 00:16:04.342 }, 00:16:04.342 { 00:16:04.342 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.342 "subtype": "NVMe", 00:16:04.342 "listen_addresses": [ 00:16:04.342 { 00:16:04.342 "trtype": "VFIOUSER", 00:16:04.342 "adrfam": "IPv4", 00:16:04.342 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.342 "trsvcid": "0" 00:16:04.342 } 00:16:04.342 ], 00:16:04.342 "allow_any_host": true, 00:16:04.342 "hosts": [], 00:16:04.342 "serial_number": "SPDK2", 00:16:04.342 "model_number": "SPDK bdev Controller", 00:16:04.342 "max_namespaces": 32, 00:16:04.342 "min_cntlid": 1, 00:16:04.342 "max_cntlid": 65519, 00:16:04.342 "namespaces": [ 00:16:04.342 { 00:16:04.342 "nsid": 1, 00:16:04.342 "bdev_name": "Malloc2", 00:16:04.342 "name": "Malloc2", 00:16:04.342 "nguid": "572A879ECF39486DBEB292DB3234BF60", 00:16:04.342 "uuid": "572a879e-cf39-486d-beb2-92db3234bf60" 00:16:04.342 } 00:16:04.342 ] 00:16:04.342 } 00:16:04.342 ] 00:16:04.342 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1276138 00:16:04.342 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:04.342 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.342 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.342 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:04.342 [2024-05-15 15:34:17.346456] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:16:04.342 [2024-05-15 15:34:17.346499] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276266 ] 00:16:04.342 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.342 [2024-05-15 15:34:17.362797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:04.342 [2024-05-15 15:34:17.380411] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:04.342 [2024-05-15 15:34:17.390253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.342 [2024-05-15 15:34:17.390292] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcfe1abf000 00:16:04.342 [2024-05-15 15:34:17.391255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.392261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.393278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.394270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.395281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.396292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.397305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.398313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-05-15 15:34:17.399343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.342 [2024-05-15 15:34:17.399366] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcfe0870000 00:16:04.342 [2024-05-15 15:34:17.400493] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.342 [2024-05-15 15:34:17.415892] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:04.342 [2024-05-15 15:34:17.415929] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:04.342 [2024-05-15 15:34:17.421028] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:04.342 [2024-05-15 15:34:17.421082] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 
num_trackers = 192 00:16:04.342 [2024-05-15 15:34:17.421173] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:04.342 [2024-05-15 15:34:17.421214] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:04.342 [2024-05-15 15:34:17.421234] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:04.342 [2024-05-15 15:34:17.422035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:04.342 [2024-05-15 15:34:17.422055] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:04.342 [2024-05-15 15:34:17.422067] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:04.342 [2024-05-15 15:34:17.423043] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:04.342 [2024-05-15 15:34:17.423062] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:04.343 [2024-05-15 15:34:17.423075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:04.343 [2024-05-15 15:34:17.424047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:04.343 [2024-05-15 15:34:17.424066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:04.343 [2024-05-15 15:34:17.425051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:04.343 [2024-05-15 15:34:17.425071] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:04.343 [2024-05-15 15:34:17.425080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:04.343 [2024-05-15 15:34:17.425091] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:04.343 [2024-05-15 15:34:17.425200] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:04.343 [2024-05-15 15:34:17.425208] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:04.343 [2024-05-15 15:34:17.425238] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:04.343 [2024-05-15 15:34:17.426057] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:04.343 [2024-05-15 15:34:17.427068] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:04.343 [2024-05-15 15:34:17.428078] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.343 [2024-05-15 15:34:17.429068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.343 [2024-05-15 15:34:17.429149] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:04.343 [2024-05-15 15:34:17.430084] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:04.343 [2024-05-15 15:34:17.430103] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:04.343 [2024-05-15 15:34:17.430112] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:04.343 [2024-05-15 15:34:17.430135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:04.343 [2024-05-15 15:34:17.430148] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:04.343 [2024-05-15 15:34:17.430171] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.343 [2024-05-15 15:34:17.430181] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.343 [2024-05-15 15:34:17.430224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.343 [2024-05-15 15:34:17.438230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:04.343 [2024-05-15 15:34:17.438253] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:04.343 [2024-05-15 15:34:17.438263] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:04.343 [2024-05-15 15:34:17.438272] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:04.343 [2024-05-15 15:34:17.438280] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:04.343 [2024-05-15 15:34:17.438293] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:04.343 [2024-05-15 15:34:17.438303] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:04.343 [2024-05-15 15:34:17.438311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:04.343 [2024-05-15 15:34:17.438325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 
00:16:04.343 [2024-05-15 15:34:17.438341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:04.601 [2024-05-15 15:34:17.446227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:04.601 [2024-05-15 15:34:17.446253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.601 [2024-05-15 15:34:17.446267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.601 [2024-05-15 15:34:17.446279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.601 [2024-05-15 15:34:17.446292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.601 [2024-05-15 15:34:17.446305] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.446324] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.446339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:04.601 [2024-05-15 15:34:17.454226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:04.601 [2024-05-15 15:34:17.454245] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:04.601 [2024-05-15 15:34:17.454255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.454282] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.454294] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.454308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.601 [2024-05-15 15:34:17.462229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:04.601 [2024-05-15 15:34:17.462303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.462319] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.462332] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:04.601 [2024-05-15 15:34:17.462341] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:16:04.601 [2024-05-15 15:34:17.462352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:04.601 [2024-05-15 15:34:17.470227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:04.601 [2024-05-15 15:34:17.470251] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:04.601 [2024-05-15 15:34:17.470282] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.470297] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.470309] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.601 [2024-05-15 15:34:17.470317] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.601 [2024-05-15 15:34:17.470328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.601 [2024-05-15 15:34:17.478228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:04.601 [2024-05-15 15:34:17.478256] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.478272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.478288] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.601 [2024-05-15 15:34:17.478297] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.601 [2024-05-15 15:34:17.478307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.601 [2024-05-15 15:34:17.486229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:04.601 [2024-05-15 15:34:17.486259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.486272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.486289] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.486300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.486309] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:04.601 [2024-05-15 
15:34:17.486317] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:04.601 [2024-05-15 15:34:17.486325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:04.601 [2024-05-15 15:34:17.486334] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:04.602 [2024-05-15 15:34:17.486365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:04.602 [2024-05-15 15:34:17.494230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.494257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:04.602 [2024-05-15 15:34:17.502227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.502252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:04.602 [2024-05-15 15:34:17.510229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.510255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.602 [2024-05-15 15:34:17.518225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.518252] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:04.602 [2024-05-15 15:34:17.518263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:04.602 [2024-05-15 15:34:17.518269] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:04.602 [2024-05-15 15:34:17.518276] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:04.602 [2024-05-15 15:34:17.518286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:04.602 [2024-05-15 15:34:17.518298] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:04.602 [2024-05-15 15:34:17.518306] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:04.602 [2024-05-15 15:34:17.518320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:04.602 [2024-05-15 15:34:17.518333] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:04.602 [2024-05-15 15:34:17.518341] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.602 [2024-05-15 15:34:17.518350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 
00:16:04.602 [2024-05-15 15:34:17.518362] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:04.602 [2024-05-15 15:34:17.518370] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:04.602 [2024-05-15 15:34:17.518380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:04.602 [2024-05-15 15:34:17.526240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.526268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.526284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:04.602 [2024-05-15 15:34:17.526298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:04.602 ===================================================== 00:16:04.602 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.602 ===================================================== 00:16:04.602 Controller Capabilities/Features 00:16:04.602 ================================ 00:16:04.602 Vendor ID: 4e58 00:16:04.602 Subsystem Vendor ID: 4e58 00:16:04.602 Serial Number: SPDK2 00:16:04.602 Model Number: SPDK bdev Controller 00:16:04.602 Firmware Version: 24.05 00:16:04.602 Recommended Arb Burst: 6 00:16:04.602 IEEE OUI Identifier: 8d 6b 50 00:16:04.602 Multi-path I/O 00:16:04.602 May have multiple subsystem ports: Yes 00:16:04.602 May have multiple controllers: Yes 00:16:04.602 Associated with SR-IOV VF: No 00:16:04.602 Max Data Transfer Size: 131072 00:16:04.602 Max Number of Namespaces: 32 00:16:04.602 Max Number of I/O Queues: 127 00:16:04.602 NVMe Specification Version (VS): 1.3 00:16:04.602 NVMe Specification Version (Identify): 1.3 00:16:04.602 Maximum Queue Entries: 256 00:16:04.602 Contiguous Queues Required: Yes 00:16:04.602 Arbitration Mechanisms Supported 00:16:04.602 Weighted Round Robin: Not Supported 00:16:04.602 Vendor Specific: Not Supported 00:16:04.602 Reset Timeout: 15000 ms 00:16:04.602 Doorbell Stride: 4 bytes 00:16:04.602 NVM Subsystem Reset: Not Supported 00:16:04.602 Command Sets Supported 00:16:04.602 NVM Command Set: Supported 00:16:04.602 Boot Partition: Not Supported 00:16:04.602 Memory Page Size Minimum: 4096 bytes 00:16:04.602 Memory Page Size Maximum: 4096 bytes 00:16:04.602 Persistent Memory Region: Not Supported 00:16:04.602 Optional Asynchronous Events Supported 00:16:04.602 Namespace Attribute Notices: Supported 00:16:04.602 Firmware Activation Notices: Not Supported 00:16:04.602 ANA Change Notices: Not Supported 00:16:04.602 PLE Aggregate Log Change Notices: Not Supported 00:16:04.602 LBA Status Info Alert Notices: Not Supported 00:16:04.602 EGE Aggregate Log Change Notices: Not Supported 00:16:04.602 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.602 Zone Descriptor Change Notices: Not Supported 00:16:04.602 Discovery Log Change Notices: Not Supported 00:16:04.602 Controller Attributes 00:16:04.602 128-bit Host Identifier: Supported 00:16:04.602 Non-Operational Permissive Mode: Not Supported 00:16:04.602 NVM Sets: Not Supported 00:16:04.602 Read Recovery Levels: Not Supported 
00:16:04.602 Endurance Groups: Not Supported 00:16:04.602 Predictable Latency Mode: Not Supported 00:16:04.602 Traffic Based Keep ALive: Not Supported 00:16:04.602 Namespace Granularity: Not Supported 00:16:04.602 SQ Associations: Not Supported 00:16:04.602 UUID List: Not Supported 00:16:04.602 Multi-Domain Subsystem: Not Supported 00:16:04.602 Fixed Capacity Management: Not Supported 00:16:04.602 Variable Capacity Management: Not Supported 00:16:04.602 Delete Endurance Group: Not Supported 00:16:04.602 Delete NVM Set: Not Supported 00:16:04.602 Extended LBA Formats Supported: Not Supported 00:16:04.602 Flexible Data Placement Supported: Not Supported 00:16:04.602 00:16:04.602 Controller Memory Buffer Support 00:16:04.602 ================================ 00:16:04.602 Supported: No 00:16:04.602 00:16:04.602 Persistent Memory Region Support 00:16:04.602 ================================ 00:16:04.602 Supported: No 00:16:04.602 00:16:04.602 Admin Command Set Attributes 00:16:04.602 ============================ 00:16:04.602 Security Send/Receive: Not Supported 00:16:04.602 Format NVM: Not Supported 00:16:04.602 Firmware Activate/Download: Not Supported 00:16:04.602 Namespace Management: Not Supported 00:16:04.602 Device Self-Test: Not Supported 00:16:04.602 Directives: Not Supported 00:16:04.602 NVMe-MI: Not Supported 00:16:04.602 Virtualization Management: Not Supported 00:16:04.602 Doorbell Buffer Config: Not Supported 00:16:04.602 Get LBA Status Capability: Not Supported 00:16:04.602 Command & Feature Lockdown Capability: Not Supported 00:16:04.602 Abort Command Limit: 4 00:16:04.602 Async Event Request Limit: 4 00:16:04.602 Number of Firmware Slots: N/A 00:16:04.602 Firmware Slot 1 Read-Only: N/A 00:16:04.602 Firmware Activation Without Reset: N/A 00:16:04.602 Multiple Update Detection Support: N/A 00:16:04.602 Firmware Update Granularity: No Information Provided 00:16:04.602 Per-Namespace SMART Log: No 00:16:04.602 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.602 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:04.603 Command Effects Log Page: Supported 00:16:04.603 Get Log Page Extended Data: Supported 00:16:04.603 Telemetry Log Pages: Not Supported 00:16:04.603 Persistent Event Log Pages: Not Supported 00:16:04.603 Supported Log Pages Log Page: May Support 00:16:04.603 Commands Supported & Effects Log Page: Not Supported 00:16:04.603 Feature Identifiers & Effects Log Page:May Support 00:16:04.603 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.603 Data Area 4 for Telemetry Log: Not Supported 00:16:04.603 Error Log Page Entries Supported: 128 00:16:04.603 Keep Alive: Supported 00:16:04.603 Keep Alive Granularity: 10000 ms 00:16:04.603 00:16:04.603 NVM Command Set Attributes 00:16:04.603 ========================== 00:16:04.603 Submission Queue Entry Size 00:16:04.603 Max: 64 00:16:04.603 Min: 64 00:16:04.603 Completion Queue Entry Size 00:16:04.603 Max: 16 00:16:04.603 Min: 16 00:16:04.603 Number of Namespaces: 32 00:16:04.603 Compare Command: Supported 00:16:04.603 Write Uncorrectable Command: Not Supported 00:16:04.603 Dataset Management Command: Supported 00:16:04.603 Write Zeroes Command: Supported 00:16:04.603 Set Features Save Field: Not Supported 00:16:04.603 Reservations: Not Supported 00:16:04.603 Timestamp: Not Supported 00:16:04.603 Copy: Supported 00:16:04.603 Volatile Write Cache: Present 00:16:04.603 Atomic Write Unit (Normal): 1 00:16:04.603 Atomic Write Unit (PFail): 1 00:16:04.603 Atomic Compare & Write Unit: 1 00:16:04.603 Fused Compare & Write: 
Supported 00:16:04.603 Scatter-Gather List 00:16:04.603 SGL Command Set: Supported (Dword aligned) 00:16:04.603 SGL Keyed: Not Supported 00:16:04.603 SGL Bit Bucket Descriptor: Not Supported 00:16:04.603 SGL Metadata Pointer: Not Supported 00:16:04.603 Oversized SGL: Not Supported 00:16:04.603 SGL Metadata Address: Not Supported 00:16:04.603 SGL Offset: Not Supported 00:16:04.603 Transport SGL Data Block: Not Supported 00:16:04.603 Replay Protected Memory Block: Not Supported 00:16:04.603 00:16:04.603 Firmware Slot Information 00:16:04.603 ========================= 00:16:04.603 Active slot: 1 00:16:04.603 Slot 1 Firmware Revision: 24.05 00:16:04.603 00:16:04.603 00:16:04.603 Commands Supported and Effects 00:16:04.603 ============================== 00:16:04.603 Admin Commands 00:16:04.603 -------------- 00:16:04.603 Get Log Page (02h): Supported 00:16:04.603 Identify (06h): Supported 00:16:04.603 Abort (08h): Supported 00:16:04.603 Set Features (09h): Supported 00:16:04.603 Get Features (0Ah): Supported 00:16:04.603 Asynchronous Event Request (0Ch): Supported 00:16:04.603 Keep Alive (18h): Supported 00:16:04.603 I/O Commands 00:16:04.603 ------------ 00:16:04.603 Flush (00h): Supported LBA-Change 00:16:04.603 Write (01h): Supported LBA-Change 00:16:04.603 Read (02h): Supported 00:16:04.603 Compare (05h): Supported 00:16:04.603 Write Zeroes (08h): Supported LBA-Change 00:16:04.603 Dataset Management (09h): Supported LBA-Change 00:16:04.603 Copy (19h): Supported LBA-Change 00:16:04.603 Unknown (79h): Supported LBA-Change 00:16:04.603 Unknown (7Ah): Supported 00:16:04.603 00:16:04.603 Error Log 00:16:04.603 ========= 00:16:04.603 00:16:04.603 Arbitration 00:16:04.603 =========== 00:16:04.603 Arbitration Burst: 1 00:16:04.603 00:16:04.603 Power Management 00:16:04.603 ================ 00:16:04.603 Number of Power States: 1 00:16:04.603 Current Power State: Power State #0 00:16:04.603 Power State #0: 00:16:04.603 Max Power: 0.00 W 00:16:04.603 Non-Operational State: Operational 00:16:04.603 Entry Latency: Not Reported 00:16:04.603 Exit Latency: Not Reported 00:16:04.603 Relative Read Throughput: 0 00:16:04.603 Relative Read Latency: 0 00:16:04.603 Relative Write Throughput: 0 00:16:04.603 Relative Write Latency: 0 00:16:04.603 Idle Power: Not Reported 00:16:04.603 Active Power: Not Reported 00:16:04.603 Non-Operational Permissive Mode: Not Supported 00:16:04.603 00:16:04.603 Health Information 00:16:04.603 ================== 00:16:04.603 Critical Warnings: 00:16:04.603 Available Spare Space: OK 00:16:04.603 Temperature: OK 00:16:04.603 Device Reliability: OK 00:16:04.603 Read Only: No 00:16:04.603 Volatile Memory Backup: OK 00:16:04.603 Current Temperature: 0 Kelvin (-2[2024-05-15 15:34:17.526423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:04.603 [2024-05-15 15:34:17.534228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:04.603 [2024-05-15 15:34:17.534275] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:04.603 [2024-05-15 15:34:17.534292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.603 [2024-05-15 15:34:17.534304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.603 
[2024-05-15 15:34:17.534314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.603 [2024-05-15 15:34:17.534323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.603 [2024-05-15 15:34:17.534406] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.603 [2024-05-15 15:34:17.534427] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:04.603 [2024-05-15 15:34:17.535404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.603 [2024-05-15 15:34:17.535474] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:04.603 [2024-05-15 15:34:17.535489] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:04.603 [2024-05-15 15:34:17.536414] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:04.603 [2024-05-15 15:34:17.536437] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:04.603 [2024-05-15 15:34:17.536488] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:04.603 [2024-05-15 15:34:17.537701] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.603 73 Celsius) 00:16:04.603 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:04.603 Available Spare: 0% 00:16:04.603 Available Spare Threshold: 0% 00:16:04.603 Life Percentage Used: 0% 00:16:04.603 Data Units Read: 0 00:16:04.603 Data Units Written: 0 00:16:04.603 Host Read Commands: 0 00:16:04.603 Host Write Commands: 0 00:16:04.603 Controller Busy Time: 0 minutes 00:16:04.603 Power Cycles: 0 00:16:04.603 Power On Hours: 0 hours 00:16:04.603 Unsafe Shutdowns: 0 00:16:04.603 Unrecoverable Media Errors: 0 00:16:04.603 Lifetime Error Log Entries: 0 00:16:04.603 Warning Temperature Time: 0 minutes 00:16:04.603 Critical Temperature Time: 0 minutes 00:16:04.603 00:16:04.603 Number of Queues 00:16:04.603 ================ 00:16:04.603 Number of I/O Submission Queues: 127 00:16:04.603 Number of I/O Completion Queues: 127 00:16:04.603 00:16:04.603 Active Namespaces 00:16:04.603 ================= 00:16:04.603 Namespace ID:1 00:16:04.603 Error Recovery Timeout: Unlimited 00:16:04.603 Command Set Identifier: NVM (00h) 00:16:04.603 Deallocate: Supported 00:16:04.603 Deallocated/Unwritten Error: Not Supported 00:16:04.603 Deallocated Read Value: Unknown 00:16:04.603 Deallocate in Write Zeroes: Not Supported 00:16:04.603 Deallocated Guard Field: 0xFFFF 00:16:04.603 Flush: Supported 00:16:04.603 Reservation: Supported 00:16:04.603 Namespace Sharing Capabilities: Multiple Controllers 00:16:04.603 Size (in LBAs): 131072 (0GiB) 00:16:04.603 Capacity (in LBAs): 131072 (0GiB) 00:16:04.603 Utilization (in LBAs): 131072 (0GiB) 00:16:04.603 NGUID: 572A879ECF39486DBEB292DB3234BF60 00:16:04.603 UUID: 572a879e-cf39-486d-beb2-92db3234bf60 00:16:04.603 Thin Provisioning: Not Supported 00:16:04.603 Per-NS Atomic 
Units: Yes 00:16:04.603 Atomic Boundary Size (Normal): 0 00:16:04.603 Atomic Boundary Size (PFail): 0 00:16:04.603 Atomic Boundary Offset: 0 00:16:04.603 Maximum Single Source Range Length: 65535 00:16:04.603 Maximum Copy Length: 65535 00:16:04.603 Maximum Source Range Count: 1 00:16:04.603 NGUID/EUI64 Never Reused: No 00:16:04.603 Namespace Write Protected: No 00:16:04.603 Number of LBA Formats: 1 00:16:04.603 Current LBA Format: LBA Format #00 00:16:04.603 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.603 00:16:04.604 15:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:04.604 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.860 [2024-05-15 15:34:17.762104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.116 Initializing NVMe Controllers 00:16:10.116 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:10.116 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:10.117 Initialization complete. Launching workers. 00:16:10.117 ======================================================== 00:16:10.117 Latency(us) 00:16:10.117 Device Information : IOPS MiB/s Average min max 00:16:10.117 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34095.96 133.19 3753.44 1158.11 7592.15 00:16:10.117 ======================================================== 00:16:10.117 Total : 34095.96 133.19 3753.44 1158.11 7592.15 00:16:10.117 00:16:10.117 [2024-05-15 15:34:22.866581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.117 15:34:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:10.117 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.117 [2024-05-15 15:34:23.105193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.375 Initializing NVMe Controllers 00:16:15.375 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:15.375 Initialization complete. Launching workers. 
00:16:15.375 ======================================================== 00:16:15.375 Latency(us) 00:16:15.375 Device Information : IOPS MiB/s Average min max 00:16:15.375 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31889.36 124.57 4013.24 1195.78 8313.41 00:16:15.375 ======================================================== 00:16:15.375 Total : 31889.36 124.57 4013.24 1195.78 8313.41 00:16:15.375 00:16:15.375 [2024-05-15 15:34:28.125983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.375 15:34:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:15.375 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.375 [2024-05-15 15:34:28.359025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.651 [2024-05-15 15:34:33.496386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.651 Initializing NVMe Controllers 00:16:20.651 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.651 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.651 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:20.651 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:20.651 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:20.651 Initialization complete. Launching workers. 00:16:20.651 Starting thread on core 2 00:16:20.651 Starting thread on core 3 00:16:20.651 Starting thread on core 1 00:16:20.651 15:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:20.651 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.916 [2024-05-15 15:34:33.824726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:24.195 [2024-05-15 15:34:36.887508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:24.195 Initializing NVMe Controllers 00:16:24.195 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.195 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.195 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:24.195 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:24.195 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:24.195 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:24.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:24.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:24.195 Initialization complete. Launching workers. 
00:16:24.195 Starting thread on core 1 with urgent priority queue 00:16:24.195 Starting thread on core 2 with urgent priority queue 00:16:24.195 Starting thread on core 3 with urgent priority queue 00:16:24.195 Starting thread on core 0 with urgent priority queue 00:16:24.195 SPDK bdev Controller (SPDK2 ) core 0: 6623.33 IO/s 15.10 secs/100000 ios 00:16:24.195 SPDK bdev Controller (SPDK2 ) core 1: 6802.67 IO/s 14.70 secs/100000 ios 00:16:24.195 SPDK bdev Controller (SPDK2 ) core 2: 6430.33 IO/s 15.55 secs/100000 ios 00:16:24.195 SPDK bdev Controller (SPDK2 ) core 3: 6532.00 IO/s 15.31 secs/100000 ios 00:16:24.195 ======================================================== 00:16:24.195 00:16:24.195 15:34:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:24.195 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.195 [2024-05-15 15:34:37.197758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:24.195 Initializing NVMe Controllers 00:16:24.195 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.195 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.195 Namespace ID: 1 size: 0GB 00:16:24.195 Initialization complete. 00:16:24.195 INFO: using host memory buffer for IO 00:16:24.195 Hello world! 00:16:24.195 [2024-05-15 15:34:37.206818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:24.195 15:34:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:24.195 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.452 [2024-05-15 15:34:37.505689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.825 Initializing NVMe Controllers 00:16:25.825 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.825 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.825 Initialization complete. Launching workers. 
00:16:25.825 submit (in ns) avg, min, max = 8734.7, 3541.1, 4013558.9 00:16:25.825 complete (in ns) avg, min, max = 26097.6, 2073.3, 5015538.9 00:16:25.825 00:16:25.825 Submit histogram 00:16:25.825 ================ 00:16:25.825 Range in us Cumulative Count 00:16:25.825 3.532 - 3.556: 0.0151% ( 2) 00:16:25.825 3.556 - 3.579: 0.1433% ( 17) 00:16:25.825 3.579 - 3.603: 2.4434% ( 305) 00:16:25.825 3.603 - 3.627: 8.0468% ( 743) 00:16:25.825 3.627 - 3.650: 19.0121% ( 1454) 00:16:25.825 3.650 - 3.674: 30.9804% ( 1587) 00:16:25.825 3.674 - 3.698: 40.2715% ( 1232) 00:16:25.825 3.698 - 3.721: 47.7149% ( 987) 00:16:25.825 3.721 - 3.745: 52.4133% ( 623) 00:16:25.825 3.745 - 3.769: 56.5913% ( 554) 00:16:25.825 3.769 - 3.793: 60.6410% ( 537) 00:16:25.825 3.793 - 3.816: 63.6576% ( 400) 00:16:25.825 3.816 - 3.840: 66.0633% ( 319) 00:16:25.825 3.840 - 3.864: 69.5324% ( 460) 00:16:25.825 3.864 - 3.887: 74.1252% ( 609) 00:16:25.825 3.887 - 3.911: 78.8763% ( 630) 00:16:25.825 3.911 - 3.935: 82.6018% ( 494) 00:16:25.825 3.935 - 3.959: 84.8869% ( 303) 00:16:25.825 3.959 - 3.982: 86.5686% ( 223) 00:16:25.825 3.982 - 4.006: 88.3786% ( 240) 00:16:25.825 4.006 - 4.030: 89.7285% ( 179) 00:16:25.825 4.030 - 4.053: 90.8673% ( 151) 00:16:25.825 4.053 - 4.077: 91.7345% ( 115) 00:16:25.825 4.077 - 4.101: 92.5792% ( 112) 00:16:25.825 4.101 - 4.124: 93.5068% ( 123) 00:16:25.825 4.124 - 4.148: 94.3514% ( 112) 00:16:25.825 4.148 - 4.172: 94.9170% ( 75) 00:16:25.825 4.172 - 4.196: 95.3394% ( 56) 00:16:25.825 4.196 - 4.219: 95.5656% ( 30) 00:16:25.825 4.219 - 4.243: 95.8296% ( 35) 00:16:25.825 4.243 - 4.267: 96.0860% ( 34) 00:16:25.825 4.267 - 4.290: 96.2971% ( 28) 00:16:25.825 4.290 - 4.314: 96.4027% ( 14) 00:16:25.825 4.314 - 4.338: 96.5083% ( 14) 00:16:25.825 4.338 - 4.361: 96.6516% ( 19) 00:16:25.825 4.361 - 4.385: 96.8100% ( 21) 00:16:25.825 4.385 - 4.409: 96.9306% ( 16) 00:16:25.825 4.409 - 4.433: 96.9834% ( 7) 00:16:25.825 4.433 - 4.456: 97.0136% ( 4) 00:16:25.825 4.456 - 4.480: 97.0437% ( 4) 00:16:25.825 4.480 - 4.504: 97.1041% ( 8) 00:16:25.825 4.504 - 4.527: 97.1267% ( 3) 00:16:25.825 4.527 - 4.551: 97.1418% ( 2) 00:16:25.825 4.551 - 4.575: 97.1493% ( 1) 00:16:25.825 4.575 - 4.599: 97.1569% ( 1) 00:16:25.825 4.599 - 4.622: 97.1795% ( 3) 00:16:25.825 4.693 - 4.717: 97.1870% ( 1) 00:16:25.825 4.741 - 4.764: 97.1946% ( 1) 00:16:25.825 4.764 - 4.788: 97.2097% ( 2) 00:16:25.825 4.812 - 4.836: 97.2247% ( 2) 00:16:25.825 4.836 - 4.859: 97.2549% ( 4) 00:16:25.825 4.859 - 4.883: 97.2700% ( 2) 00:16:25.825 4.883 - 4.907: 97.2851% ( 2) 00:16:25.825 4.907 - 4.930: 97.3002% ( 2) 00:16:25.825 4.930 - 4.954: 97.3454% ( 6) 00:16:25.825 4.954 - 4.978: 97.3831% ( 5) 00:16:25.825 4.978 - 5.001: 97.4208% ( 5) 00:16:25.825 5.001 - 5.025: 97.4585% ( 5) 00:16:25.825 5.025 - 5.049: 97.4962% ( 5) 00:16:25.825 5.049 - 5.073: 97.5867% ( 12) 00:16:25.825 5.073 - 5.096: 97.6320% ( 6) 00:16:25.825 5.096 - 5.120: 97.6621% ( 4) 00:16:25.825 5.120 - 5.144: 97.6697% ( 1) 00:16:25.825 5.144 - 5.167: 97.7149% ( 6) 00:16:25.825 5.167 - 5.191: 97.7526% ( 5) 00:16:25.825 5.191 - 5.215: 97.7828% ( 4) 00:16:25.825 5.215 - 5.239: 97.7979% ( 2) 00:16:25.825 5.239 - 5.262: 97.8130% ( 2) 00:16:25.825 5.262 - 5.286: 97.8356% ( 3) 00:16:25.825 5.286 - 5.310: 97.8431% ( 1) 00:16:25.825 5.310 - 5.333: 97.8507% ( 1) 00:16:25.825 5.333 - 5.357: 97.8808% ( 4) 00:16:25.825 5.357 - 5.381: 97.8884% ( 1) 00:16:25.825 5.381 - 5.404: 97.8959% ( 1) 00:16:25.825 5.404 - 5.428: 97.9110% ( 2) 00:16:25.825 5.428 - 5.452: 97.9186% ( 1) 00:16:25.825 5.476 - 5.499: 97.9336% ( 2) 
00:16:25.825 5.499 - 5.523: 97.9563% ( 3) 00:16:25.825 5.547 - 5.570: 97.9638% ( 1) 00:16:25.825 5.689 - 5.713: 97.9713% ( 1) 00:16:25.825 5.736 - 5.760: 97.9789% ( 1) 00:16:25.825 5.760 - 5.784: 97.9864% ( 1) 00:16:25.825 5.784 - 5.807: 97.9940% ( 1) 00:16:25.825 5.831 - 5.855: 98.0166% ( 3) 00:16:25.825 5.902 - 5.926: 98.0317% ( 2) 00:16:25.825 5.926 - 5.950: 98.0392% ( 1) 00:16:25.825 5.950 - 5.973: 98.0468% ( 1) 00:16:25.825 6.068 - 6.116: 98.0543% ( 1) 00:16:25.825 6.116 - 6.163: 98.0769% ( 3) 00:16:25.825 6.210 - 6.258: 98.0845% ( 1) 00:16:25.825 6.305 - 6.353: 98.0995% ( 2) 00:16:25.825 6.353 - 6.400: 98.1297% ( 4) 00:16:25.825 6.447 - 6.495: 98.1523% ( 3) 00:16:25.825 6.542 - 6.590: 98.1599% ( 1) 00:16:25.825 6.637 - 6.684: 98.1674% ( 1) 00:16:25.825 6.684 - 6.732: 98.1825% ( 2) 00:16:25.825 6.732 - 6.779: 98.1976% ( 2) 00:16:25.825 6.779 - 6.827: 98.2051% ( 1) 00:16:25.825 6.827 - 6.874: 98.2127% ( 1) 00:16:25.825 6.874 - 6.921: 98.2202% ( 1) 00:16:25.825 6.921 - 6.969: 98.2278% ( 1) 00:16:25.825 6.969 - 7.016: 98.2353% ( 1) 00:16:25.825 7.016 - 7.064: 98.2428% ( 1) 00:16:25.825 7.159 - 7.206: 98.2504% ( 1) 00:16:25.825 7.206 - 7.253: 98.2730% ( 3) 00:16:25.825 7.253 - 7.301: 98.2805% ( 1) 00:16:25.825 7.301 - 7.348: 98.2881% ( 1) 00:16:25.826 7.348 - 7.396: 98.2956% ( 1) 00:16:25.826 7.490 - 7.538: 98.3032% ( 1) 00:16:25.826 7.633 - 7.680: 98.3107% ( 1) 00:16:25.826 7.680 - 7.727: 98.3183% ( 1) 00:16:25.826 7.775 - 7.822: 98.3258% ( 1) 00:16:25.826 7.822 - 7.870: 98.3409% ( 2) 00:16:25.826 7.917 - 7.964: 98.3635% ( 3) 00:16:25.826 8.012 - 8.059: 98.3937% ( 4) 00:16:25.826 8.059 - 8.107: 98.4163% ( 3) 00:16:25.826 8.107 - 8.154: 98.4238% ( 1) 00:16:25.826 8.154 - 8.201: 98.4314% ( 1) 00:16:25.826 8.201 - 8.249: 98.4389% ( 1) 00:16:25.826 8.249 - 8.296: 98.4465% ( 1) 00:16:25.826 8.296 - 8.344: 98.4691% ( 3) 00:16:25.826 8.344 - 8.391: 98.4766% ( 1) 00:16:25.826 8.391 - 8.439: 98.4917% ( 2) 00:16:25.826 8.439 - 8.486: 98.5068% ( 2) 00:16:25.826 8.533 - 8.581: 98.5143% ( 1) 00:16:25.826 8.581 - 8.628: 98.5219% ( 1) 00:16:25.826 8.628 - 8.676: 98.5294% ( 1) 00:16:25.826 8.770 - 8.818: 98.5370% ( 1) 00:16:25.826 8.960 - 9.007: 98.5445% ( 1) 00:16:25.826 9.055 - 9.102: 98.5596% ( 2) 00:16:25.826 9.102 - 9.150: 98.5671% ( 1) 00:16:25.826 9.197 - 9.244: 98.5747% ( 1) 00:16:25.826 9.339 - 9.387: 98.5897% ( 2) 00:16:25.826 9.624 - 9.671: 98.5973% ( 1) 00:16:25.826 9.719 - 9.766: 98.6124% ( 2) 00:16:25.826 9.813 - 9.861: 98.6199% ( 1) 00:16:25.826 9.956 - 10.003: 98.6350% ( 2) 00:16:25.826 10.240 - 10.287: 98.6501% ( 2) 00:16:25.826 10.761 - 10.809: 98.6576% ( 1) 00:16:25.826 10.999 - 11.046: 98.6652% ( 1) 00:16:25.826 11.046 - 11.093: 98.6727% ( 1) 00:16:25.826 11.093 - 11.141: 98.6802% ( 1) 00:16:25.826 11.141 - 11.188: 98.6878% ( 1) 00:16:25.826 11.425 - 11.473: 98.6953% ( 1) 00:16:25.826 11.473 - 11.520: 98.7104% ( 2) 00:16:25.826 11.710 - 11.757: 98.7255% ( 2) 00:16:25.826 11.757 - 11.804: 98.7481% ( 3) 00:16:25.826 11.852 - 11.899: 98.7557% ( 1) 00:16:25.826 11.899 - 11.947: 98.7632% ( 1) 00:16:25.826 11.994 - 12.041: 98.7707% ( 1) 00:16:25.826 12.136 - 12.231: 98.7934% ( 3) 00:16:25.826 12.231 - 12.326: 98.8009% ( 1) 00:16:25.826 12.326 - 12.421: 98.8084% ( 1) 00:16:25.826 12.610 - 12.705: 98.8160% ( 1) 00:16:25.826 12.895 - 12.990: 98.8235% ( 1) 00:16:25.826 13.084 - 13.179: 98.8311% ( 1) 00:16:25.826 13.369 - 13.464: 98.8386% ( 1) 00:16:25.826 13.464 - 13.559: 98.8462% ( 1) 00:16:25.826 13.559 - 13.653: 98.8688% ( 3) 00:16:25.826 13.653 - 13.748: 98.8763% ( 1) 00:16:25.826 13.748 - 
13.843: 98.8839% ( 1) 00:16:25.826 13.938 - 14.033: 98.8914% ( 1) 00:16:25.826 14.033 - 14.127: 98.8989% ( 1) 00:16:25.826 14.127 - 14.222: 98.9065% ( 1) 00:16:25.826 14.412 - 14.507: 98.9140% ( 1) 00:16:25.826 14.507 - 14.601: 98.9367% ( 3) 00:16:25.826 14.791 - 14.886: 98.9517% ( 2) 00:16:25.826 14.981 - 15.076: 98.9593% ( 1) 00:16:25.826 15.265 - 15.360: 98.9668% ( 1) 00:16:25.826 15.739 - 15.834: 98.9744% ( 1) 00:16:25.826 15.929 - 16.024: 98.9819% ( 1) 00:16:25.826 16.213 - 16.308: 98.9894% ( 1) 00:16:25.826 17.067 - 17.161: 98.9970% ( 1) 00:16:25.826 17.161 - 17.256: 99.0121% ( 2) 00:16:25.826 17.256 - 17.351: 99.0271% ( 2) 00:16:25.826 17.351 - 17.446: 99.0347% ( 1) 00:16:25.826 17.446 - 17.541: 99.0498% ( 2) 00:16:25.826 17.541 - 17.636: 99.0649% ( 2) 00:16:25.826 17.636 - 17.730: 99.0950% ( 4) 00:16:25.826 17.730 - 17.825: 99.1403% ( 6) 00:16:25.826 17.825 - 17.920: 99.1704% ( 4) 00:16:25.826 17.920 - 18.015: 99.2308% ( 8) 00:16:25.826 18.015 - 18.110: 99.2836% ( 7) 00:16:25.826 18.110 - 18.204: 99.3213% ( 5) 00:16:25.826 18.204 - 18.299: 99.3590% ( 5) 00:16:25.826 18.299 - 18.394: 99.4193% ( 8) 00:16:25.826 18.394 - 18.489: 99.5324% ( 15) 00:16:25.826 18.489 - 18.584: 99.6154% ( 11) 00:16:25.826 18.584 - 18.679: 99.6606% ( 6) 00:16:25.826 18.679 - 18.773: 99.6833% ( 3) 00:16:25.826 18.773 - 18.868: 99.6908% ( 1) 00:16:25.826 18.868 - 18.963: 99.7059% ( 2) 00:16:25.826 18.963 - 19.058: 99.7360% ( 4) 00:16:25.826 19.153 - 19.247: 99.7587% ( 3) 00:16:25.826 19.247 - 19.342: 99.7738% ( 2) 00:16:25.826 19.437 - 19.532: 99.7813% ( 1) 00:16:25.826 19.532 - 19.627: 99.7888% ( 1) 00:16:25.826 19.816 - 19.911: 99.8039% ( 2) 00:16:25.826 20.196 - 20.290: 99.8115% ( 1) 00:16:25.826 22.281 - 22.376: 99.8190% ( 1) 00:16:25.826 23.135 - 23.230: 99.8265% ( 1) 00:16:25.826 23.514 - 23.609: 99.8341% ( 1) 00:16:25.826 23.988 - 24.083: 99.8416% ( 1) 00:16:25.826 25.221 - 25.410: 99.8492% ( 1) 00:16:25.826 25.600 - 25.790: 99.8567% ( 1) 00:16:25.826 26.169 - 26.359: 99.8643% ( 1) 00:16:25.826 27.876 - 28.065: 99.8718% ( 1) 00:16:25.826 28.444 - 28.634: 99.8793% ( 1) 00:16:25.826 3021.938 - 3034.074: 99.8869% ( 1) 00:16:25.826 3980.705 - 4004.978: 99.9925% ( 14) 00:16:25.826 4004.978 - 4029.250: 100.0000% ( 1) 00:16:25.826 00:16:25.826 Complete histogram 00:16:25.826 ================== 00:16:25.826 Range in us Cumulative Count 00:16:25.826 2.062 - 2.074: 0.0075% ( 1) 00:16:25.826 2.074 - 2.086: 5.9050% ( 782) 00:16:25.826 2.086 - 2.098: 29.1101% ( 3077) 00:16:25.826 2.098 - 2.110: 34.0950% ( 661) 00:16:25.826 2.110 - 2.121: 43.8612% ( 1295) 00:16:25.826 2.121 - 2.133: 57.8205% ( 1851) 00:16:25.826 2.133 - 2.145: 60.3469% ( 335) 00:16:25.826 2.145 - 2.157: 65.0377% ( 622) 00:16:25.826 2.157 - 2.169: 71.2519% ( 824) 00:16:25.826 2.169 - 2.181: 72.3831% ( 150) 00:16:25.826 2.181 - 2.193: 76.2293% ( 510) 00:16:25.826 2.193 - 2.204: 80.7089% ( 594) 00:16:25.826 2.204 - 2.216: 81.6742% ( 128) 00:16:25.826 2.216 - 2.228: 84.6606% ( 396) 00:16:25.826 2.228 - 2.240: 88.1750% ( 466) 00:16:25.826 2.240 - 2.252: 89.2081% ( 137) 00:16:25.826 2.252 - 2.264: 90.6637% ( 193) 00:16:25.826 2.264 - 2.276: 92.6018% ( 257) 00:16:25.826 2.276 - 2.287: 93.2202% ( 82) 00:16:25.826 2.287 - 2.299: 93.8989% ( 90) 00:16:25.826 2.299 - 2.311: 94.6682% ( 102) 00:16:25.826 2.311 - 2.323: 95.0302% ( 48) 00:16:25.826 2.323 - 2.335: 95.2338% ( 27) 00:16:25.826 2.335 - 2.347: 95.3544% ( 16) 00:16:25.826 2.347 - 2.359: 95.4600% ( 14) 00:16:25.826 2.359 - 2.370: 95.5807% ( 16) 00:16:25.826 2.370 - 2.382: 95.7994% ( 29) 00:16:25.826 2.382 
- 2.394: 96.0784% ( 37) 00:16:25.826 2.394 - 2.406: 96.3424% ( 35) 00:16:25.826 2.406 - 2.418: 96.5611% ( 29) 00:16:25.826 2.418 - 2.430: 96.7572% ( 26) 00:16:25.826 2.430 - 2.441: 96.8854% ( 17) 00:16:25.826 2.441 - 2.453: 97.0739% ( 25) 00:16:25.826 2.453 - 2.465: 97.3002% ( 30) 00:16:25.826 2.465 - 2.477: 97.4434% ( 19) 00:16:25.826 2.477 - 2.489: 97.6018% ( 21) 00:16:25.826 2.489 - 2.501: 97.7753% ( 23) 00:16:25.826 2.501 - 2.513: 97.9035% ( 17) 00:16:25.826 2.513 - 2.524: 97.9713% ( 9) 00:16:25.826 2.524 - 2.536: 98.0392% ( 9) 00:16:25.826 2.536 - 2.548: 9[2024-05-15 15:34:38.600166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.826 8.1071% ( 9) 00:16:25.826 2.548 - 2.560: 98.1222% ( 2) 00:16:25.826 2.560 - 2.572: 98.1523% ( 4) 00:16:25.826 2.572 - 2.584: 98.2051% ( 7) 00:16:25.826 2.584 - 2.596: 98.2202% ( 2) 00:16:25.826 2.596 - 2.607: 98.2353% ( 2) 00:16:25.826 2.607 - 2.619: 98.2504% ( 2) 00:16:25.826 2.619 - 2.631: 98.2730% ( 3) 00:16:25.826 2.631 - 2.643: 98.3032% ( 4) 00:16:25.826 2.643 - 2.655: 98.3107% ( 1) 00:16:25.826 2.679 - 2.690: 98.3258% ( 2) 00:16:25.826 2.690 - 2.702: 98.3333% ( 1) 00:16:25.826 2.702 - 2.714: 98.3409% ( 1) 00:16:25.826 2.714 - 2.726: 98.3484% ( 1) 00:16:25.826 2.761 - 2.773: 98.3560% ( 1) 00:16:25.826 2.785 - 2.797: 98.3635% ( 1) 00:16:25.826 2.797 - 2.809: 98.3786% ( 2) 00:16:25.826 2.868 - 2.880: 98.3861% ( 1) 00:16:25.826 2.880 - 2.892: 98.4087% ( 3) 00:16:25.826 2.963 - 2.975: 98.4163% ( 1) 00:16:25.826 2.987 - 2.999: 98.4238% ( 1) 00:16:25.826 3.461 - 3.484: 98.4389% ( 2) 00:16:25.826 3.508 - 3.532: 98.4465% ( 1) 00:16:25.826 3.532 - 3.556: 98.4615% ( 2) 00:16:25.826 3.556 - 3.579: 98.4691% ( 1) 00:16:25.826 3.579 - 3.603: 98.4766% ( 1) 00:16:25.826 3.603 - 3.627: 98.5143% ( 5) 00:16:25.826 3.721 - 3.745: 98.5219% ( 1) 00:16:25.826 3.769 - 3.793: 98.5294% ( 1) 00:16:25.826 3.864 - 3.887: 98.5370% ( 1) 00:16:25.826 3.911 - 3.935: 98.5445% ( 1) 00:16:25.826 3.935 - 3.959: 98.5520% ( 1) 00:16:25.826 4.030 - 4.053: 98.5596% ( 1) 00:16:25.826 4.267 - 4.290: 98.5671% ( 1) 00:16:25.826 4.883 - 4.907: 98.5747% ( 1) 00:16:25.826 5.049 - 5.073: 98.5822% ( 1) 00:16:25.826 5.333 - 5.357: 98.5897% ( 1) 00:16:25.826 5.713 - 5.736: 98.5973% ( 1) 00:16:25.826 6.021 - 6.044: 98.6048% ( 1) 00:16:25.826 6.044 - 6.068: 98.6199% ( 2) 00:16:25.827 6.068 - 6.116: 98.6275% ( 1) 00:16:25.827 6.400 - 6.447: 98.6350% ( 1) 00:16:25.827 6.495 - 6.542: 98.6425% ( 1) 00:16:25.827 6.732 - 6.779: 98.6576% ( 2) 00:16:25.827 6.779 - 6.827: 98.6727% ( 2) 00:16:25.827 6.969 - 7.016: 98.6802% ( 1) 00:16:25.827 7.064 - 7.111: 98.6878% ( 1) 00:16:25.827 7.490 - 7.538: 98.6953% ( 1) 00:16:25.827 8.770 - 8.818: 98.7029% ( 1) 00:16:25.827 10.145 - 10.193: 98.7104% ( 1) 00:16:25.827 14.696 - 14.791: 98.7179% ( 1) 00:16:25.827 15.455 - 15.550: 98.7330% ( 2) 00:16:25.827 15.550 - 15.644: 98.7481% ( 2) 00:16:25.827 15.739 - 15.834: 98.7783% ( 4) 00:16:25.827 15.834 - 15.929: 98.8009% ( 3) 00:16:25.827 15.929 - 16.024: 98.8462% ( 6) 00:16:25.827 16.024 - 16.119: 98.8612% ( 2) 00:16:25.827 16.119 - 16.213: 98.8989% ( 5) 00:16:25.827 16.213 - 16.308: 98.9819% ( 11) 00:16:25.827 16.308 - 16.403: 99.0121% ( 4) 00:16:25.827 16.403 - 16.498: 99.0649% ( 7) 00:16:25.827 16.498 - 16.593: 99.0950% ( 4) 00:16:25.827 16.593 - 16.687: 99.1101% ( 2) 00:16:25.827 16.687 - 16.782: 99.1176% ( 1) 00:16:25.827 16.782 - 16.877: 99.1855% ( 9) 00:16:25.827 16.877 - 16.972: 99.2459% ( 8) 00:16:25.827 16.972 - 17.067: 99.2685% ( 3) 00:16:25.827 17.161 
- 17.256: 99.2836% ( 2) 00:16:25.827 17.256 - 17.351: 99.2911% ( 1) 00:16:25.827 17.351 - 17.446: 99.3062% ( 2) 00:16:25.827 17.446 - 17.541: 99.3213% ( 2) 00:16:25.827 17.541 - 17.636: 99.3288% ( 1) 00:16:25.827 17.636 - 17.730: 99.3439% ( 2) 00:16:25.827 17.825 - 17.920: 99.3514% ( 1) 00:16:25.827 18.015 - 18.110: 99.3590% ( 1) 00:16:25.827 18.204 - 18.299: 99.3665% ( 1) 00:16:25.827 18.489 - 18.584: 99.3741% ( 1) 00:16:25.827 18.963 - 19.058: 99.3891% ( 2) 00:16:25.827 20.670 - 20.764: 99.3967% ( 1) 00:16:25.827 28.444 - 28.634: 99.4042% ( 1) 00:16:25.827 2791.348 - 2803.484: 99.4118% ( 1) 00:16:25.827 3980.705 - 4004.978: 99.7738% ( 48) 00:16:25.827 4004.978 - 4029.250: 99.9925% ( 29) 00:16:25.827 5000.154 - 5024.427: 100.0000% ( 1) 00:16:25.827 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:25.827 [ 00:16:25.827 { 00:16:25.827 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:25.827 "subtype": "Discovery", 00:16:25.827 "listen_addresses": [], 00:16:25.827 "allow_any_host": true, 00:16:25.827 "hosts": [] 00:16:25.827 }, 00:16:25.827 { 00:16:25.827 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:25.827 "subtype": "NVMe", 00:16:25.827 "listen_addresses": [ 00:16:25.827 { 00:16:25.827 "trtype": "VFIOUSER", 00:16:25.827 "adrfam": "IPv4", 00:16:25.827 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:25.827 "trsvcid": "0" 00:16:25.827 } 00:16:25.827 ], 00:16:25.827 "allow_any_host": true, 00:16:25.827 "hosts": [], 00:16:25.827 "serial_number": "SPDK1", 00:16:25.827 "model_number": "SPDK bdev Controller", 00:16:25.827 "max_namespaces": 32, 00:16:25.827 "min_cntlid": 1, 00:16:25.827 "max_cntlid": 65519, 00:16:25.827 "namespaces": [ 00:16:25.827 { 00:16:25.827 "nsid": 1, 00:16:25.827 "bdev_name": "Malloc1", 00:16:25.827 "name": "Malloc1", 00:16:25.827 "nguid": "288D5F0AFB294374AD5988C6B7840E3E", 00:16:25.827 "uuid": "288d5f0a-fb29-4374-ad59-88c6b7840e3e" 00:16:25.827 }, 00:16:25.827 { 00:16:25.827 "nsid": 2, 00:16:25.827 "bdev_name": "Malloc3", 00:16:25.827 "name": "Malloc3", 00:16:25.827 "nguid": "FD442CA9537C4FA8BDB01C3BE193A3F5", 00:16:25.827 "uuid": "fd442ca9-537c-4fa8-bdb0-1c3be193a3f5" 00:16:25.827 } 00:16:25.827 ] 00:16:25.827 }, 00:16:25.827 { 00:16:25.827 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:25.827 "subtype": "NVMe", 00:16:25.827 "listen_addresses": [ 00:16:25.827 { 00:16:25.827 "trtype": "VFIOUSER", 00:16:25.827 "adrfam": "IPv4", 00:16:25.827 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:25.827 "trsvcid": "0" 00:16:25.827 } 00:16:25.827 ], 00:16:25.827 "allow_any_host": true, 00:16:25.827 "hosts": [], 00:16:25.827 "serial_number": "SPDK2", 00:16:25.827 "model_number": "SPDK bdev Controller", 00:16:25.827 "max_namespaces": 32, 00:16:25.827 "min_cntlid": 1, 00:16:25.827 "max_cntlid": 65519, 00:16:25.827 "namespaces": [ 00:16:25.827 { 00:16:25.827 "nsid": 1, 00:16:25.827 "bdev_name": "Malloc2", 00:16:25.827 "name": 
"Malloc2", 00:16:25.827 "nguid": "572A879ECF39486DBEB292DB3234BF60", 00:16:25.827 "uuid": "572a879e-cf39-486d-beb2-92db3234bf60" 00:16:25.827 } 00:16:25.827 ] 00:16:25.827 } 00:16:25.827 ] 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1278799 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:25.827 15:34:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:26.085 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.085 [2024-05-15 15:34:39.064705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.085 Malloc4 00:16:26.085 15:34:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:26.342 [2024-05-15 15:34:39.401128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.343 15:34:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.343 Asynchronous Event Request test 00:16:26.343 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.343 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.343 Registering asynchronous event callbacks... 00:16:26.343 Starting namespace attribute notice tests for all controllers... 00:16:26.343 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:26.343 aer_cb - Changed Namespace 00:16:26.343 Cleaning up... 
00:16:26.601 [ 00:16:26.601 { 00:16:26.601 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.601 "subtype": "Discovery", 00:16:26.601 "listen_addresses": [], 00:16:26.601 "allow_any_host": true, 00:16:26.601 "hosts": [] 00:16:26.601 }, 00:16:26.601 { 00:16:26.601 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:26.601 "subtype": "NVMe", 00:16:26.601 "listen_addresses": [ 00:16:26.601 { 00:16:26.601 "trtype": "VFIOUSER", 00:16:26.601 "adrfam": "IPv4", 00:16:26.601 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:26.601 "trsvcid": "0" 00:16:26.601 } 00:16:26.601 ], 00:16:26.601 "allow_any_host": true, 00:16:26.601 "hosts": [], 00:16:26.601 "serial_number": "SPDK1", 00:16:26.601 "model_number": "SPDK bdev Controller", 00:16:26.601 "max_namespaces": 32, 00:16:26.601 "min_cntlid": 1, 00:16:26.601 "max_cntlid": 65519, 00:16:26.601 "namespaces": [ 00:16:26.601 { 00:16:26.601 "nsid": 1, 00:16:26.601 "bdev_name": "Malloc1", 00:16:26.601 "name": "Malloc1", 00:16:26.601 "nguid": "288D5F0AFB294374AD5988C6B7840E3E", 00:16:26.601 "uuid": "288d5f0a-fb29-4374-ad59-88c6b7840e3e" 00:16:26.601 }, 00:16:26.601 { 00:16:26.601 "nsid": 2, 00:16:26.601 "bdev_name": "Malloc3", 00:16:26.601 "name": "Malloc3", 00:16:26.601 "nguid": "FD442CA9537C4FA8BDB01C3BE193A3F5", 00:16:26.601 "uuid": "fd442ca9-537c-4fa8-bdb0-1c3be193a3f5" 00:16:26.601 } 00:16:26.601 ] 00:16:26.601 }, 00:16:26.601 { 00:16:26.601 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:26.601 "subtype": "NVMe", 00:16:26.601 "listen_addresses": [ 00:16:26.601 { 00:16:26.601 "trtype": "VFIOUSER", 00:16:26.601 "adrfam": "IPv4", 00:16:26.601 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:26.601 "trsvcid": "0" 00:16:26.601 } 00:16:26.601 ], 00:16:26.601 "allow_any_host": true, 00:16:26.601 "hosts": [], 00:16:26.601 "serial_number": "SPDK2", 00:16:26.601 "model_number": "SPDK bdev Controller", 00:16:26.601 "max_namespaces": 32, 00:16:26.601 "min_cntlid": 1, 00:16:26.601 "max_cntlid": 65519, 00:16:26.601 "namespaces": [ 00:16:26.601 { 00:16:26.601 "nsid": 1, 00:16:26.601 "bdev_name": "Malloc2", 00:16:26.601 "name": "Malloc2", 00:16:26.601 "nguid": "572A879ECF39486DBEB292DB3234BF60", 00:16:26.601 "uuid": "572a879e-cf39-486d-beb2-92db3234bf60" 00:16:26.601 }, 00:16:26.601 { 00:16:26.601 "nsid": 2, 00:16:26.601 "bdev_name": "Malloc4", 00:16:26.601 "name": "Malloc4", 00:16:26.601 "nguid": "5FD27AA36DF94C24B60F6AA56C7CCE33", 00:16:26.601 "uuid": "5fd27aa3-6df9-4c24-b60f-6aa56c7cce33" 00:16:26.601 } 00:16:26.601 ] 00:16:26.601 } 00:16:26.601 ] 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1278799 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1273197 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1273197 ']' 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1273197 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1273197 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1273197' 00:16:26.601 killing process with pid 1273197 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1273197 00:16:26.601 [2024-05-15 15:34:39.691949] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:26.601 15:34:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1273197 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1278945 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1278945' 00:16:27.166 Process pid: 1278945 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1278945 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1278945 ']' 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.166 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:27.166 [2024-05-15 15:34:40.082364] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:27.166 [2024-05-15 15:34:40.083611] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:16:27.166 [2024-05-15 15:34:40.083679] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.166 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.166 [2024-05-15 15:34:40.126931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:27.166 [2024-05-15 15:34:40.163853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.166 [2024-05-15 15:34:40.249865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.166 [2024-05-15 15:34:40.249914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.166 [2024-05-15 15:34:40.249944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.166 [2024-05-15 15:34:40.249963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.166 [2024-05-15 15:34:40.249974] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.166 [2024-05-15 15:34:40.250025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.166 [2024-05-15 15:34:40.250087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.166 [2024-05-15 15:34:40.250152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.166 [2024-05-15 15:34:40.250154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.424 [2024-05-15 15:34:40.343324] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:27.424 [2024-05-15 15:34:40.343553] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:27.424 [2024-05-15 15:34:40.343829] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:27.424 [2024-05-15 15:34:40.344489] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:27.424 [2024-05-15 15:34:40.344731] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:27.424 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.424 15:34:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:27.424 15:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:28.356 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:28.614 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:28.614 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:28.614 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:28.614 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:28.614 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:28.872 Malloc1 00:16:28.872 15:34:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:29.130 15:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:29.387 15:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:29.645 [2024-05-15 15:34:42.614775] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:29.645 15:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.645 15:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:29.645 15:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:29.902 Malloc2 00:16:29.903 15:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:30.161 15:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:30.418 15:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1278945 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1278945 ']' 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1278945 
00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1278945 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1278945' 00:16:30.675 killing process with pid 1278945 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1278945 00:16:30.675 [2024-05-15 15:34:43.661684] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:30.675 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1278945 00:16:30.933 15:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:30.933 15:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:30.933 00:16:30.933 real 0m52.685s 00:16:30.933 user 3m28.016s 00:16:30.933 sys 0m4.326s 00:16:30.933 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:30.933 15:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:30.933 ************************************ 00:16:30.933 END TEST nvmf_vfio_user 00:16:30.933 ************************************ 00:16:30.933 15:34:43 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:30.933 15:34:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:30.933 15:34:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:30.933 15:34:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:30.933 ************************************ 00:16:30.933 START TEST nvmf_vfio_user_nvme_compliance 00:16:30.933 ************************************ 00:16:30.933 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:31.191 * Looking for test storage... 
00:16:31.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.191 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1279432 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1279432' 00:16:31.192 Process pid: 1279432 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1279432 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1279432 ']' 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.192 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:31.192 [2024-05-15 15:34:44.128680] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:16:31.192 [2024-05-15 15:34:44.128785] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.192 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.192 [2024-05-15 15:34:44.167154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:31.192 [2024-05-15 15:34:44.197883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.192 [2024-05-15 15:34:44.278000] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.192 [2024-05-15 15:34:44.278055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.192 [2024-05-15 15:34:44.278083] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.192 [2024-05-15 15:34:44.278094] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.192 [2024-05-15 15:34:44.278104] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.192 [2024-05-15 15:34:44.278168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.192 [2024-05-15 15:34:44.278266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.192 [2024-05-15 15:34:44.278269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.449 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.449 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:31.449 15:34:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.382 malloc0 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.382 [2024-05-15 15:34:45.472058] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.382 15:34:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:32.639 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.639 00:16:32.639 00:16:32.639 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.639 http://cunit.sourceforge.net/ 00:16:32.639 00:16:32.639 00:16:32.639 Suite: nvme_compliance 00:16:32.639 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 15:34:45.650734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.639 [2024-05-15 15:34:45.652175] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:32.639 [2024-05-15 15:34:45.652213] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:32.639 [2024-05-15 15:34:45.652240] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:32.639 [2024-05-15 15:34:45.653751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.639 passed 00:16:32.896 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 15:34:45.743347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.896 [2024-05-15 15:34:45.746359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.896 passed 00:16:32.896 Test: admin_identify_ns ...[2024-05-15 15:34:45.832299] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.896 [2024-05-15 15:34:45.895234] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:32.896 [2024-05-15 15:34:45.903246] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:32.896 [2024-05-15 15:34:45.924352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.896 passed 00:16:33.153 Test: admin_get_features_mandatory_features ...[2024-05-15 15:34:46.007902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.153 [2024-05-15 15:34:46.010923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.153 passed 00:16:33.153 Test: admin_get_features_optional_features ...[2024-05-15 15:34:46.092455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.153 [2024-05-15 15:34:46.095478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.153 passed 00:16:33.153 Test: admin_set_features_number_of_queues ...[2024-05-15 15:34:46.181356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.410 [2024-05-15 15:34:46.286328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.410 passed 00:16:33.410 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 15:34:46.371570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.410 [2024-05-15 15:34:46.374596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.410 passed 
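The rpc_cmd trace further above sets up the vfio-user target that the compliance suite is exercising here. As a rough standalone equivalent (assuming rpc_cmd is the test framework's wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, with paths relative to the SPDK repo root), the same sequence could be issued by hand:

# Create the vfio-user transport, back it with a 64 MB malloc bdev (512-byte blocks),
# and expose it at /var/run/vfio-user -- these mirror the rpc_cmd calls in the trace above.
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# The compliance suite is then pointed at that endpoint, exactly as in the trace:
./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'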
00:16:33.410 Test: admin_get_log_page_with_lpo ...[2024-05-15 15:34:46.455737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.667 [2024-05-15 15:34:46.524234] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:33.667 [2024-05-15 15:34:46.534986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.667 passed 00:16:33.667 Test: fabric_property_get ...[2024-05-15 15:34:46.618815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.667 [2024-05-15 15:34:46.620111] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:33.667 [2024-05-15 15:34:46.621838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.668 passed 00:16:33.668 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 15:34:46.704362] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.668 [2024-05-15 15:34:46.705640] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:33.668 [2024-05-15 15:34:46.707386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.668 passed 00:16:33.926 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 15:34:46.796440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.926 [2024-05-15 15:34:46.880225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.926 [2024-05-15 15:34:46.896225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.926 [2024-05-15 15:34:46.901341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.926 passed 00:16:33.926 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 15:34:46.984478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.926 [2024-05-15 15:34:46.985754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:33.926 [2024-05-15 15:34:46.987503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.926 passed 00:16:34.183 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 15:34:47.072023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.183 [2024-05-15 15:34:47.147226] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:34.183 [2024-05-15 15:34:47.171227] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.183 [2024-05-15 15:34:47.176338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.183 passed 00:16:34.183 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 15:34:47.264624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.183 [2024-05-15 15:34:47.265868] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:34.183 [2024-05-15 15:34:47.265918] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:34.183 [2024-05-15 15:34:47.267641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.440 passed 00:16:34.440 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
15:34:47.353711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.440 [2024-05-15 15:34:47.445227] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:34.440 [2024-05-15 15:34:47.453228] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:34.440 [2024-05-15 15:34:47.461226] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:34.440 [2024-05-15 15:34:47.469242] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:34.440 [2024-05-15 15:34:47.498352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.440 passed 00:16:34.697 Test: admin_create_io_sq_verify_pc ...[2024-05-15 15:34:47.586620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.697 [2024-05-15 15:34:47.602238] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:34.697 [2024-05-15 15:34:47.619905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.697 passed 00:16:34.697 Test: admin_create_io_qp_max_qps ...[2024-05-15 15:34:47.701446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.067 [2024-05-15 15:34:48.791233] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:36.067 [2024-05-15 15:34:49.166360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.324 passed 00:16:36.324 Test: admin_create_io_sq_shared_cq ...[2024-05-15 15:34:49.251627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.324 [2024-05-15 15:34:49.383223] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:36.324 [2024-05-15 15:34:49.420298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.582 passed 00:16:36.582 00:16:36.582 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.582 suites 1 1 n/a 0 0 00:16:36.582 tests 18 18 18 0 0 00:16:36.582 asserts 360 360 360 0 n/a 00:16:36.582 00:16:36.582 Elapsed time = 1.561 seconds 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1279432 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1279432 ']' 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1279432 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1279432 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1279432' 00:16:36.582 killing process with pid 1279432 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 1279432 00:16:36.582 [2024-05-15 15:34:49.495564] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:36.582 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1279432 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:36.841 00:16:36.841 real 0m5.723s 00:16:36.841 user 0m16.119s 00:16:36.841 sys 0m0.549s 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:36.841 ************************************ 00:16:36.841 END TEST nvmf_vfio_user_nvme_compliance 00:16:36.841 ************************************ 00:16:36.841 15:34:49 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.841 15:34:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:36.841 15:34:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.841 15:34:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.841 ************************************ 00:16:36.841 START TEST nvmf_vfio_user_fuzz 00:16:36.841 ************************************ 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.841 * Looking for test storage... 
00:16:36.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:36.841 15:34:49 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1280253 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1280253' 00:16:36.841 Process pid: 1280253 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1280253 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1280253 ']' 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:36.841 15:34:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 15:34:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:37.137 15:34:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:37.137 15:34:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.507 malloc0 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:38.507 15:34:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:10.562 Fuzzing completed. Shutting down the fuzz application 00:17:10.562 00:17:10.562 Dumping successful admin opcodes: 00:17:10.562 8, 9, 10, 24, 00:17:10.563 Dumping successful io opcodes: 00:17:10.563 0, 00:17:10.563 NS: 0x200003a1ef00 I/O qp, Total commands completed: 568900, total successful commands: 2189, random_seed: 1965525824 00:17:10.563 NS: 0x200003a1ef00 admin qp, Total commands completed: 72504, total successful commands: 572, random_seed: 1992493504 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1280253 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1280253 ']' 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1280253 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1280253 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1280253' 00:17:10.563 killing process with pid 1280253 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 1280253 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1280253 00:17:10.563 15:35:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
00:17:10.563 15:35:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:10.563 00:17:10.563 real 0m32.254s 00:17:10.563 user 0m31.586s 00:17:10.563 sys 0m28.466s 00:17:10.563 15:35:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.563 15:35:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.563 ************************************ 00:17:10.563 END TEST nvmf_vfio_user_fuzz 00:17:10.563 ************************************ 00:17:10.563 15:35:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:10.563 15:35:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:10.563 15:35:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.563 15:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.563 ************************************ 00:17:10.563 START TEST nvmf_host_management 00:17:10.563 ************************************ 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:10.563 * Looking for test storage... 00:17:10.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.563 15:35:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.936 15:35:24 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:11.936 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.936 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:11.937 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:11.937 Found net devices under 0000:09:00.0: cvl_0_0 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:11.937 Found net devices under 0000:09:00.1: cvl_0_1 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:11.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:17:11.937 00:17:11.937 --- 10.0.0.2 ping statistics --- 00:17:11.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.937 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:11.937 00:17:11.937 --- 10.0.0.1 ping statistics --- 00:17:11.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.937 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1285994 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1285994 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1285994 ']' 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.937 15:35:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:11.937 [2024-05-15 15:35:24.934262] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
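The nvmf_tcp_init sequence traced above is what splits the two E810 ports between a target network namespace and the initiator side. Stripped of the xtrace prefixes it is roughly the following sketch (cvl_0_0/cvl_0_1 are the port names on this particular host, 10.0.0.1/10.0.0.2 the framework defaults):

# Move the target-side port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1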
00:17:11.937 [2024-05-15 15:35:24.934350] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.937 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.937 [2024-05-15 15:35:24.979913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:11.937 [2024-05-15 15:35:25.016097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.195 [2024-05-15 15:35:25.107418] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.195 [2024-05-15 15:35:25.107480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.195 [2024-05-15 15:35:25.107506] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.195 [2024-05-15 15:35:25.107521] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.195 [2024-05-15 15:35:25.107533] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.195 [2024-05-15 15:35:25.107622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.195 [2024-05-15 15:35:25.107665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.195 [2024-05-15 15:35:25.107805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:12.195 [2024-05-15 15:35:25.107807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.195 [2024-05-15 15:35:25.274978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:12.195 15:35:25 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.195 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.454 Malloc0 00:17:12.454 [2024-05-15 15:35:25.333840] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:12.454 [2024-05-15 15:35:25.334127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1286044 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1286044 /var/tmp/bdevperf.sock 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1286044 ']' 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:12.454 { 00:17:12.454 "params": { 00:17:12.454 "name": "Nvme$subsystem", 00:17:12.454 "trtype": "$TEST_TRANSPORT", 00:17:12.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:12.454 "adrfam": "ipv4", 00:17:12.454 "trsvcid": "$NVMF_PORT", 00:17:12.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:12.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:12.454 "hdgst": ${hdgst:-false}, 00:17:12.454 "ddgst": ${ddgst:-false} 00:17:12.454 }, 00:17:12.454 "method": "bdev_nvme_attach_controller" 00:17:12.454 } 00:17:12.454 EOF 00:17:12.454 )") 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:12.454 15:35:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:12.454 "params": { 00:17:12.454 "name": "Nvme0", 00:17:12.454 "trtype": "tcp", 00:17:12.454 "traddr": "10.0.0.2", 00:17:12.454 "adrfam": "ipv4", 00:17:12.454 "trsvcid": "4420", 00:17:12.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:12.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:12.454 "hdgst": false, 00:17:12.454 "ddgst": false 00:17:12.454 }, 00:17:12.454 "method": "bdev_nvme_attach_controller" 00:17:12.454 }' 00:17:12.454 [2024-05-15 15:35:25.404827] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:12.454 [2024-05-15 15:35:25.404917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286044 ] 00:17:12.454 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.454 [2024-05-15 15:35:25.443691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:12.454 [2024-05-15 15:35:25.479256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.712 [2024-05-15 15:35:25.562734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.712 Running I/O for 10 seconds... 
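The printf output above is the single bdev_nvme_attach_controller entry that gen_nvmf_target_json hands to bdevperf over /dev/fd/63. A rough standalone equivalent is sketched below; the surrounding "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config layout and is assumed here, and the config file name is only illustrative:

# Hypothetical config file wrapping the fragment printed above.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
# Same bdevperf invocation as in the trace, reading the config from a file instead
# of /dev/fd/63: 64-deep queue, 64 KiB I/O size, verify workload, 10-second run.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10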
00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.712 15:35:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.969 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:17:12.969 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:17:12.969 15:35:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=520 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 520 -ge 100 ']' 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.227 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:13.227 [2024-05-15 15:35:26.125150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.227 [2024-05-15 15:35:26.125207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.227 [2024-05-15 15:35:26.125244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.227 [2024-05-15 15:35:26.125273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.227 [2024-05-15 15:35:26.125291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.227 [2024-05-15 15:35:26.125306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.227 [2024-05-15 15:35:26.125322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.227 [2024-05-15 15:35:26.125337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.227 [2024-05-15 15:35:26.125353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.227 [2024-05-15 15:35:26.125368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.227 [2024-05-15 15:35:26.125383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.227 [2024-05-15 15:35:26.125398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.227 [2024-05-15 15:35:26.125414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.125981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.125996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.228 [2024-05-15 15:35:26.126671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.228 [2024-05-15 15:35:26.126687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.126977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.126993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 
[2024-05-15 15:35:26.127067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.229 [2024-05-15 15:35:26.127237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.229 [2024-05-15 15:35:26.127334] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2830430 was disconnected and freed. reset controller. 
00:17:13.229 [2024-05-15 15:35:26.128454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.229 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.229 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:13.229 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.229 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:13.229 task offset: 76288 on job bdev=Nvme0n1 fails 00:17:13.229 00:17:13.229 Latency(us) 00:17:13.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.229 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:13.229 Job: Nvme0n1 ended in about 0.39 seconds with error 00:17:13.229 Verification LBA range: start 0x0 length 0x400 00:17:13.229 Nvme0n1 : 0.39 1489.66 93.10 164.09 0.00 37577.39 2621.44 36117.62 00:17:13.229 =================================================================================================================== 00:17:13.229 Total : 1489.66 93.10 164.09 0.00 37577.39 2621.44 36117.62 00:17:13.229 [2024-05-15 15:35:26.130359] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:13.229 [2024-05-15 15:35:26.130389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241f540 (9): Bad file descriptor 00:17:13.229 15:35:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.229 15:35:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:13.229 [2024-05-15 15:35:26.264378] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
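The long run of WRITE/READ notices ending in ABORTED - SQ DELETION above is the point of this test: after waitforio sees read progress on Nvme0n1 (65 ops on the first bdev_get_iostat poll, 520 on the second, clearing the 100-op threshold), host_management.sh removes host0 from cnode0. The target drops the queue pair, every in-flight command is completed as aborted, bdev_nvme frees qpair 0x2830430 and starts a controller reset, and once host0 is added back the reset/reconnect succeeds ("Resetting controller successful"). Condensed into plain RPC calls — rpc_cmd in the trace is effectively scripts/rpc.py, and the sockets and NQNs below are the ones from this run:

# Poll bdevperf's own RPC socket until the verify job has completed >= 100 reads
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops'
# Pull the host out of the subsystem: queued I/O is aborted (SQ DELETION)
# and the initiator side begins a controller reset
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so the reset/reconnect can complete
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0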
00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1286044 00:17:14.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1286044) - No such process 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:14.161 { 00:17:14.161 "params": { 00:17:14.161 "name": "Nvme$subsystem", 00:17:14.161 "trtype": "$TEST_TRANSPORT", 00:17:14.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:14.161 "adrfam": "ipv4", 00:17:14.161 "trsvcid": "$NVMF_PORT", 00:17:14.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:14.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:14.161 "hdgst": ${hdgst:-false}, 00:17:14.161 "ddgst": ${ddgst:-false} 00:17:14.161 }, 00:17:14.161 "method": "bdev_nvme_attach_controller" 00:17:14.161 } 00:17:14.161 EOF 00:17:14.161 )") 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:14.161 15:35:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:14.161 "params": { 00:17:14.161 "name": "Nvme0", 00:17:14.161 "trtype": "tcp", 00:17:14.161 "traddr": "10.0.0.2", 00:17:14.161 "adrfam": "ipv4", 00:17:14.161 "trsvcid": "4420", 00:17:14.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:14.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:14.161 "hdgst": false, 00:17:14.161 "ddgst": false 00:17:14.161 }, 00:17:14.161 "method": "bdev_nvme_attach_controller" 00:17:14.161 }' 00:17:14.161 [2024-05-15 15:35:27.179653] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:14.161 [2024-05-15 15:35:27.179728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286316 ] 00:17:14.161 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.161 [2024-05-15 15:35:27.218171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:14.161 [2024-05-15 15:35:27.252549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.418 [2024-05-15 15:35:27.336972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.675 Running I/O for 1 seconds... 
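A quick sanity check on the bdevperf summary tables (the one above for the interrupted run, and the one that follows for the clean 1-second rerun): MiB/s is just IOPS times the 64 KiB I/O size, e.g. 1489.66 IOPS × 64 KiB ≈ 93.10 MiB/s, and the interrupted run's Fail/s of 164.09 is consistent with the 64 in-flight commands (-q 64) all being aborted over the 0.39 s runtime (64 / 0.39 ≈ 164).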
00:17:15.608 00:17:15.608 Latency(us) 00:17:15.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.608 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.608 Verification LBA range: start 0x0 length 0x400 00:17:15.608 Nvme0n1 : 1.02 1443.57 90.22 0.00 0.00 43592.13 7864.32 33787.45 00:17:15.608 =================================================================================================================== 00:17:15.608 Total : 1443.57 90.22 0.00 0.00 43592.13 7864.32 33787.45 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.866 rmmod nvme_tcp 00:17:15.866 rmmod nvme_fabrics 00:17:15.866 rmmod nvme_keyring 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:15.866 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1285994 ']' 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1285994 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1285994 ']' 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1285994 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1285994 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1285994' 00:17:15.867 killing process with pid 1285994 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1285994 00:17:15.867 [2024-05-15 15:35:28.968376] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:15.867 15:35:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1285994 00:17:16.126 [2024-05-15 15:35:29.192673] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:16.126 15:35:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.126 15:35:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.385 15:35:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.385 15:35:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.385 15:35:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.385 15:35:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.385 15:35:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.385 15:35:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.289 15:35:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.289 15:35:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:18.289 00:17:18.289 real 0m9.171s 00:17:18.289 user 0m19.595s 00:17:18.289 sys 0m3.028s 00:17:18.289 15:35:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.289 15:35:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:18.289 ************************************ 00:17:18.289 END TEST nvmf_host_management 00:17:18.289 ************************************ 00:17:18.289 15:35:31 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:18.289 15:35:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:18.289 15:35:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.289 15:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.289 ************************************ 00:17:18.289 START TEST nvmf_lvol 00:17:18.289 ************************************ 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:18.289 * Looking for test storage... 
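Before the lvol test proper starts, nvmftestinit repeats the bring-up that is traced in detail below: the two Intel E810 ports (driver ice, device 0x159b) are discovered as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a dedicated network namespace to act as the target side, addresses are assigned, and connectivity is verified with ping in both directions. Condensed into plain commands (interface names and addresses as reported by this rig; the trace below is the authoritative sequence):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

With that in place, the nvmf_tgt launched further down (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x7) serves NVMe/TCP from inside the namespace on 10.0.0.2 while the initiator-side tools stay in the default namespace.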
00:17:18.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.289 15:35:31 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.289 15:35:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.548 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:18.548 15:35:31 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:18.548 15:35:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.548 15:35:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.075 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:21.076 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:21.076 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:21.076 Found net devices under 0000:09:00.0: cvl_0_0 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:21.076 Found net devices under 0000:09:00.1: cvl_0_1 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.076 
15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:21.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:17:21.076 00:17:21.076 --- 10.0.0.2 ping statistics --- 00:17:21.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.076 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:17:21.076 00:17:21.076 --- 10.0.0.1 ping statistics --- 00:17:21.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.076 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1288805 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1288805 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1288805 ']' 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:21.076 15:35:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 [2024-05-15 15:35:33.900137] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:21.076 [2024-05-15 15:35:33.900222] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.076 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.076 [2024-05-15 15:35:33.942872] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:21.076 [2024-05-15 15:35:33.980308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.076 [2024-05-15 15:35:34.068030] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:21.076 [2024-05-15 15:35:34.068103] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.076 [2024-05-15 15:35:34.068121] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.076 [2024-05-15 15:35:34.068135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.076 [2024-05-15 15:35:34.068146] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.076 [2024-05-15 15:35:34.068237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.076 [2024-05-15 15:35:34.068282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.076 [2024-05-15 15:35:34.068285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.335 15:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.335 [2024-05-15 15:35:34.429322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.626 15:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.883 15:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:21.883 15:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:22.140 15:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:22.140 15:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:22.140 15:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:22.398 15:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0f66c595-d145-44d2-844e-dc86ed44c5a0 00:17:22.398 15:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f66c595-d145-44d2-844e-dc86ed44c5a0 lvol 20 00:17:22.655 15:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=13e62b52-d521-45a0-b221-5f9133307fea 00:17:22.655 15:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:22.913 15:35:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13e62b52-d521-45a0-b221-5f9133307fea 00:17:23.170 15:35:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:23.427 [2024-05-15 15:35:36.472090] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:23.427 [2024-05-15 15:35:36.472395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.427 15:35:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.684 15:35:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1289118 00:17:23.684 15:35:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:23.684 15:35:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:23.684 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.054 15:35:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 13e62b52-d521-45a0-b221-5f9133307fea MY_SNAPSHOT 00:17:25.054 15:35:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1186631a-7741-4910-b672-204eaa540b50 00:17:25.054 15:35:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 13e62b52-d521-45a0-b221-5f9133307fea 30 00:17:25.311 15:35:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1186631a-7741-4910-b672-204eaa540b50 MY_CLONE 00:17:25.569 15:35:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f7d1e6e8-18f6-4e5b-8a19-02549df3a394 00:17:25.569 15:35:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f7d1e6e8-18f6-4e5b-8a19-02549df3a394 00:17:26.134 15:35:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1289118 00:17:34.236 Initializing NVMe Controllers 00:17:34.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:34.237 Controller IO queue size 128, less than required. 00:17:34.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:34.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:34.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:34.237 Initialization complete. Launching workers. 
00:17:34.237 ======================================================== 00:17:34.237 Latency(us) 00:17:34.237 Device Information : IOPS MiB/s Average min max 00:17:34.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10614.94 41.46 12062.07 1943.16 120381.12 00:17:34.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10423.54 40.72 12279.96 2026.56 53695.25 00:17:34.237 ======================================================== 00:17:34.237 Total : 21038.48 82.18 12170.03 1943.16 120381.12 00:17:34.237 00:17:34.237 15:35:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:34.495 15:35:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13e62b52-d521-45a0-b221-5f9133307fea 00:17:34.752 15:35:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f66c595-d145-44d2-844e-dc86ed44c5a0 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.010 rmmod nvme_tcp 00:17:35.010 rmmod nvme_fabrics 00:17:35.010 rmmod nvme_keyring 00:17:35.010 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1288805 ']' 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1288805 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1288805 ']' 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1288805 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1288805 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1288805' 00:17:35.268 killing process with pid 1288805 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1288805 00:17:35.268 [2024-05-15 15:35:48.144695] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:17:35.268 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1288805 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.526 15:35:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.428 15:35:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.428 00:17:37.428 real 0m19.143s 00:17:37.428 user 1m4.299s 00:17:37.428 sys 0m5.780s 00:17:37.428 15:35:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.428 15:35:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:37.428 ************************************ 00:17:37.428 END TEST nvmf_lvol 00:17:37.428 ************************************ 00:17:37.428 15:35:50 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:37.428 15:35:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:37.428 15:35:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.428 15:35:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.428 ************************************ 00:17:37.428 START TEST nvmf_lvs_grow 00:17:37.428 ************************************ 00:17:37.428 15:35:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:37.687 * Looking for test storage... 
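Before the nvmf_lvs_grow output begins in earnest, it is worth collecting what the nvmf_lvol test that just passed actually drove. The sketch below collapses the rpc.py calls visible in the trace into one place; the rpc.py path is shortened, the UUID captures stand in for the shell substitutions the script performs, and the UUIDs themselves differ on every run.

```bash
# Hedged summary of the RPC sequence the nvmf_lvol test above issues through
# scripts/rpc.py. Sizes and names match the trace; lvstore/lvol UUIDs are
# printed by the RPCs and captured by the script on each run.
rpc=./scripts/rpc.py   # shortened; the log uses the full workspace path

$rpc nvmf_create_transport -t tcp -o -u 8192

$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf (randwrite, qd 128, 10 s) runs against the namespace,
# the lvol is snapshotted, resized, cloned and the clone inflated, as traced above.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
```

The perf workload and the snapshot/clone/inflate calls overlap deliberately, exercising lvol metadata operations under live I/O before the subsystem and lvstore are torn down.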
00:17:37.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.687 15:35:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:40.217 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:40.217 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:40.217 Found net devices under 0000:09:00.0: cvl_0_0 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:40.217 Found net devices under 0000:09:00.1: cvl_0_1 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:40.217 00:17:40.217 --- 10.0.0.2 ping statistics --- 00:17:40.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.217 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:40.217 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:17:40.217 00:17:40.218 --- 10.0.0.1 ping statistics --- 00:17:40.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.218 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1292781 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1292781 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1292781 ']' 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.218 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:40.218 [2024-05-15 15:35:53.226115] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:40.218 [2024-05-15 15:35:53.226202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.218 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.218 [2024-05-15 15:35:53.269030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:40.218 [2024-05-15 15:35:53.299884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.476 [2024-05-15 15:35:53.381253] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
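Earlier in this test's preamble the harness re-detects the E810 pair by walking sysfs: for each supported PCI function it lists the net devices bound to it and keeps the ones that are up, which is how cvl_0_0 and cvl_0_1 are found again. A rough sketch of that mapping, with the harness's exact "up" check approximated via operstate:

```bash
# Rough sketch of gather_supported_nvmf_pci_devs as traced above: map each
# detected PCI function to its kernel net device name through sysfs.
# PCI addresses are the ones reported in this run.
net_devs=()
for pci in 0000:09:00.0 0000:09:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue
        dev=${path##*/}
        # Keep only interfaces that are up (approximation of the harness check).
        [ "$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)" = up ] || continue
        echo "Found net devices under $pci: $dev"
        net_devs+=("$dev")
    done
done
```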
00:17:40.476 [2024-05-15 15:35:53.381320] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.476 [2024-05-15 15:35:53.381348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.476 [2024-05-15 15:35:53.381360] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.476 [2024-05-15 15:35:53.381370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.476 [2024-05-15 15:35:53.381396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.476 15:35:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:40.734 [2024-05-15 15:35:53.794777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.734 15:35:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:40.734 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:40.734 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:40.734 15:35:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:40.991 ************************************ 00:17:40.991 START TEST lvs_grow_clean 00:17:40.991 ************************************ 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.991 15:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:41.249 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:41.249 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:41.507 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:41.507 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:41.507 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:41.765 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:41.765 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:41.765 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9aa61367-ba82-4c33-b081-f60d0d872bdf lvol 150 00:17:42.023 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=69b48c1e-f668-4515-87d9-b03b1d7eba47 00:17:42.023 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:42.023 15:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:42.281 [2024-05-15 15:35:55.196516] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:42.281 [2024-05-15 15:35:55.196606] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:42.281 true 00:17:42.281 15:35:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:42.281 15:35:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:42.539 15:35:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:42.539 15:35:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:42.796 15:35:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69b48c1e-f668-4515-87d9-b03b1d7eba47 00:17:43.054 15:35:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:43.311 [2024-05-15 15:35:56.227438] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:43.311 [2024-05-15 15:35:56.227719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.311 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1293216 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1293216 /var/tmp/bdevperf.sock 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1293216 ']' 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.575 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.575 [2024-05-15 15:35:56.531927] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:43.575 [2024-05-15 15:35:56.532000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293216 ] 00:17:43.575 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.575 [2024-05-15 15:35:56.568876] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
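With the transport in place, lvs_grow_clean builds its logical volume store on an AIO bdev backed by a plain file, and the growth step later in the trace simply extends that file and rescans. A condensed sketch of the sequence follows, with the workspace path shortened to ./aio_bdev and UUID captures standing in for the script's substitutions; in the test itself the grow is issued while bdevperf I/O is already in flight.

```bash
# Hedged sketch of the aio-backed lvstore flow that lvs_grow_clean drives above.
# Sizes, cluster size and lvol size match the trace; the backing file path is shortened.
rpc=./scripts/rpc.py

rm -f ./aio_bdev
truncate -s 200M ./aio_bdev
$rpc bdev_aio_create ./aio_bdev aio_bdev 4096           # 4 KiB logical block size

lvs=$($rpc bdev_lvol_create_lvstore \
        --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB volume

# Grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore.
truncate -s 400M ./aio_bdev
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99
```

The cluster count reported by bdev_lvol_get_lvstores is what the test asserts on: 49 data clusters for the 200 MiB file, 99 once the store has been grown over the 400 MiB file.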
00:17:43.575 [2024-05-15 15:35:56.603476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.876 [2024-05-15 15:35:56.692571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.876 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.876 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:43.876 15:35:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:44.133 Nvme0n1 00:17:44.133 15:35:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:44.391 [ 00:17:44.391 { 00:17:44.391 "name": "Nvme0n1", 00:17:44.391 "aliases": [ 00:17:44.391 "69b48c1e-f668-4515-87d9-b03b1d7eba47" 00:17:44.391 ], 00:17:44.391 "product_name": "NVMe disk", 00:17:44.391 "block_size": 4096, 00:17:44.391 "num_blocks": 38912, 00:17:44.391 "uuid": "69b48c1e-f668-4515-87d9-b03b1d7eba47", 00:17:44.391 "assigned_rate_limits": { 00:17:44.391 "rw_ios_per_sec": 0, 00:17:44.391 "rw_mbytes_per_sec": 0, 00:17:44.391 "r_mbytes_per_sec": 0, 00:17:44.391 "w_mbytes_per_sec": 0 00:17:44.391 }, 00:17:44.391 "claimed": false, 00:17:44.391 "zoned": false, 00:17:44.391 "supported_io_types": { 00:17:44.391 "read": true, 00:17:44.391 "write": true, 00:17:44.391 "unmap": true, 00:17:44.391 "write_zeroes": true, 00:17:44.391 "flush": true, 00:17:44.391 "reset": true, 00:17:44.391 "compare": true, 00:17:44.391 "compare_and_write": true, 00:17:44.391 "abort": true, 00:17:44.391 "nvme_admin": true, 00:17:44.391 "nvme_io": true 00:17:44.391 }, 00:17:44.391 "memory_domains": [ 00:17:44.391 { 00:17:44.391 "dma_device_id": "system", 00:17:44.391 "dma_device_type": 1 00:17:44.391 } 00:17:44.391 ], 00:17:44.391 "driver_specific": { 00:17:44.391 "nvme": [ 00:17:44.391 { 00:17:44.391 "trid": { 00:17:44.391 "trtype": "TCP", 00:17:44.391 "adrfam": "IPv4", 00:17:44.391 "traddr": "10.0.0.2", 00:17:44.391 "trsvcid": "4420", 00:17:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:44.391 }, 00:17:44.391 "ctrlr_data": { 00:17:44.391 "cntlid": 1, 00:17:44.391 "vendor_id": "0x8086", 00:17:44.391 "model_number": "SPDK bdev Controller", 00:17:44.391 "serial_number": "SPDK0", 00:17:44.391 "firmware_revision": "24.05", 00:17:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:44.391 "oacs": { 00:17:44.391 "security": 0, 00:17:44.391 "format": 0, 00:17:44.391 "firmware": 0, 00:17:44.391 "ns_manage": 0 00:17:44.391 }, 00:17:44.391 "multi_ctrlr": true, 00:17:44.391 "ana_reporting": false 00:17:44.391 }, 00:17:44.391 "vs": { 00:17:44.391 "nvme_version": "1.3" 00:17:44.391 }, 00:17:44.391 "ns_data": { 00:17:44.391 "id": 1, 00:17:44.391 "can_share": true 00:17:44.391 } 00:17:44.391 } 00:17:44.391 ], 00:17:44.391 "mp_policy": "active_passive" 00:17:44.391 } 00:17:44.391 } 00:17:44.391 ] 00:17:44.391 15:35:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1293352 00:17:44.391 15:35:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:44.391 15:35:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:44.649 Running I/O for 10 seconds... 00:17:45.582 Latency(us) 00:17:45.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.582 Nvme0n1 : 1.00 13655.00 53.34 0.00 0.00 0.00 0.00 0.00 00:17:45.582 =================================================================================================================== 00:17:45.582 Total : 13655.00 53.34 0.00 0.00 0.00 0.00 0.00 00:17:45.582 00:17:46.514 15:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:46.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.514 Nvme0n1 : 2.00 14066.50 54.95 0.00 0.00 0.00 0.00 0.00 00:17:46.514 =================================================================================================================== 00:17:46.514 Total : 14066.50 54.95 0.00 0.00 0.00 0.00 0.00 00:17:46.514 00:17:46.771 true 00:17:46.771 15:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:46.772 15:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:47.029 15:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:47.029 15:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:47.029 15:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1293352 00:17:47.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.595 Nvme0n1 : 3.00 14059.00 54.92 0.00 0.00 0.00 0.00 0.00 00:17:47.595 =================================================================================================================== 00:17:47.595 Total : 14059.00 54.92 0.00 0.00 0.00 0.00 0.00 00:17:47.595 00:17:48.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.527 Nvme0n1 : 4.00 14227.25 55.58 0.00 0.00 0.00 0.00 0.00 00:17:48.527 =================================================================================================================== 00:17:48.527 Total : 14227.25 55.58 0.00 0.00 0.00 0.00 0.00 00:17:48.527 00:17:49.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.900 Nvme0n1 : 5.00 14278.00 55.77 0.00 0.00 0.00 0.00 0.00 00:17:49.900 =================================================================================================================== 00:17:49.900 Total : 14278.00 55.77 0.00 0.00 0.00 0.00 0.00 00:17:49.900 00:17:50.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.834 Nvme0n1 : 6.00 14307.33 55.89 0.00 0.00 0.00 0.00 0.00 00:17:50.834 =================================================================================================================== 00:17:50.834 Total : 14307.33 55.89 0.00 0.00 0.00 0.00 0.00 00:17:50.834 00:17:51.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.767 Nvme0n1 : 7.00 14292.29 55.83 0.00 0.00 0.00 0.00 0.00 00:17:51.767 
=================================================================================================================== 00:17:51.767 Total : 14292.29 55.83 0.00 0.00 0.00 0.00 0.00 00:17:51.767 00:17:52.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.700 Nvme0n1 : 8.00 14294.12 55.84 0.00 0.00 0.00 0.00 0.00 00:17:52.700 =================================================================================================================== 00:17:52.700 Total : 14294.12 55.84 0.00 0.00 0.00 0.00 0.00 00:17:52.700 00:17:53.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.633 Nvme0n1 : 9.00 14356.89 56.08 0.00 0.00 0.00 0.00 0.00 00:17:53.633 =================================================================================================================== 00:17:53.633 Total : 14356.89 56.08 0.00 0.00 0.00 0.00 0.00 00:17:53.633 00:17:54.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.565 Nvme0n1 : 10.00 14401.80 56.26 0.00 0.00 0.00 0.00 0.00 00:17:54.565 =================================================================================================================== 00:17:54.565 Total : 14401.80 56.26 0.00 0.00 0.00 0.00 0.00 00:17:54.565 00:17:54.565 00:17:54.565 Latency(us) 00:17:54.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.565 Nvme0n1 : 10.01 14403.48 56.26 0.00 0.00 8881.07 4733.16 17185.00 00:17:54.565 =================================================================================================================== 00:17:54.565 Total : 14403.48 56.26 0.00 0.00 8881.07 4733.16 17185.00 00:17:54.565 0 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1293216 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 1293216 ']' 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1293216 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293216 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293216' 00:17:54.565 killing process with pid 1293216 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1293216 00:17:54.565 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.565 00:17:54.565 Latency(us) 00:17:54.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.565 =================================================================================================================== 00:17:54.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.565 15:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1293216 00:17:54.821 15:36:07 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:55.078 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:55.335 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:55.335 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:55.592 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:55.592 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:55.592 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:55.850 [2024-05-15 15:36:08.892009] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:55.850 15:36:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:56.107 request: 00:17:56.107 { 00:17:56.107 "uuid": "9aa61367-ba82-4c33-b081-f60d0d872bdf", 00:17:56.107 "method": "bdev_lvol_get_lvstores", 00:17:56.107 "req_id": 1 00:17:56.107 } 00:17:56.107 
Got JSON-RPC error response 00:17:56.107 response: 00:17:56.107 { 00:17:56.107 "code": -19, 00:17:56.107 "message": "No such device" 00:17:56.107 } 00:17:56.107 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:56.107 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.107 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.107 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.107 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:56.365 aio_bdev 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 69b48c1e-f668-4515-87d9-b03b1d7eba47 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=69b48c1e-f668-4515-87d9-b03b1d7eba47 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:56.365 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:56.622 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 69b48c1e-f668-4515-87d9-b03b1d7eba47 -t 2000 00:17:56.879 [ 00:17:56.879 { 00:17:56.879 "name": "69b48c1e-f668-4515-87d9-b03b1d7eba47", 00:17:56.879 "aliases": [ 00:17:56.879 "lvs/lvol" 00:17:56.879 ], 00:17:56.879 "product_name": "Logical Volume", 00:17:56.879 "block_size": 4096, 00:17:56.879 "num_blocks": 38912, 00:17:56.879 "uuid": "69b48c1e-f668-4515-87d9-b03b1d7eba47", 00:17:56.879 "assigned_rate_limits": { 00:17:56.879 "rw_ios_per_sec": 0, 00:17:56.879 "rw_mbytes_per_sec": 0, 00:17:56.879 "r_mbytes_per_sec": 0, 00:17:56.879 "w_mbytes_per_sec": 0 00:17:56.879 }, 00:17:56.879 "claimed": false, 00:17:56.879 "zoned": false, 00:17:56.879 "supported_io_types": { 00:17:56.879 "read": true, 00:17:56.879 "write": true, 00:17:56.879 "unmap": true, 00:17:56.879 "write_zeroes": true, 00:17:56.879 "flush": false, 00:17:56.879 "reset": true, 00:17:56.879 "compare": false, 00:17:56.879 "compare_and_write": false, 00:17:56.879 "abort": false, 00:17:56.879 "nvme_admin": false, 00:17:56.879 "nvme_io": false 00:17:56.879 }, 00:17:56.879 "driver_specific": { 00:17:56.879 "lvol": { 00:17:56.879 "lvol_store_uuid": "9aa61367-ba82-4c33-b081-f60d0d872bdf", 00:17:56.879 "base_bdev": "aio_bdev", 00:17:56.879 "thin_provision": false, 00:17:56.879 "num_allocated_clusters": 38, 00:17:56.879 "snapshot": false, 00:17:56.879 "clone": false, 00:17:56.879 "esnap_clone": false 00:17:56.879 } 00:17:56.879 } 00:17:56.879 } 00:17:56.879 ] 00:17:56.879 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:56.879 15:36:09 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:56.879 15:36:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:57.135 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:57.135 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:57.135 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:57.393 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:57.393 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 69b48c1e-f668-4515-87d9-b03b1d7eba47 00:17:57.651 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9aa61367-ba82-4c33-b081-f60d0d872bdf 00:17:57.909 15:36:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:58.167 00:17:58.167 real 0m17.376s 00:17:58.167 user 0m16.839s 00:17:58.167 sys 0m1.912s 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:58.167 ************************************ 00:17:58.167 END TEST lvs_grow_clean 00:17:58.167 ************************************ 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.167 15:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:58.427 ************************************ 00:17:58.427 START TEST lvs_grow_dirty 00:17:58.427 ************************************ 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:58.427 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:58.684 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:58.684 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:58.942 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:17:58.942 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:17:58.942 15:36:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:58.942 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:58.942 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:58.942 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 lvol 150 00:17:59.200 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:17:59.200 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:59.200 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:59.458 [2024-05-15 15:36:12.501309] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:59.458 [2024-05-15 15:36:12.501376] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:59.458 true 00:17:59.459 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:17:59.459 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:59.717 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:59.717 15:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s 
SPDK0 00:17:59.976 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:18:00.249 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:00.544 [2024-05-15 15:36:13.488345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.544 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1295885 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1295885 /var/tmp/bdevperf.sock 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1295885 ']' 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.802 15:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:00.802 [2024-05-15 15:36:13.783303] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:00.802 [2024-05-15 15:36:13.783389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295885 ] 00:18:00.802 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.802 [2024-05-15 15:36:13.819767] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:00.802 [2024-05-15 15:36:13.854837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.060 [2024-05-15 15:36:13.942127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.060 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.060 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:01.060 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:01.317 Nvme0n1 00:18:01.317 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:01.574 [ 00:18:01.574 { 00:18:01.574 "name": "Nvme0n1", 00:18:01.574 "aliases": [ 00:18:01.574 "52b77a50-5217-4cc7-9a03-d7dcfd95d58a" 00:18:01.574 ], 00:18:01.574 "product_name": "NVMe disk", 00:18:01.574 "block_size": 4096, 00:18:01.574 "num_blocks": 38912, 00:18:01.574 "uuid": "52b77a50-5217-4cc7-9a03-d7dcfd95d58a", 00:18:01.574 "assigned_rate_limits": { 00:18:01.574 "rw_ios_per_sec": 0, 00:18:01.574 "rw_mbytes_per_sec": 0, 00:18:01.574 "r_mbytes_per_sec": 0, 00:18:01.574 "w_mbytes_per_sec": 0 00:18:01.574 }, 00:18:01.574 "claimed": false, 00:18:01.574 "zoned": false, 00:18:01.574 "supported_io_types": { 00:18:01.574 "read": true, 00:18:01.574 "write": true, 00:18:01.574 "unmap": true, 00:18:01.574 "write_zeroes": true, 00:18:01.574 "flush": true, 00:18:01.574 "reset": true, 00:18:01.574 "compare": true, 00:18:01.574 "compare_and_write": true, 00:18:01.574 "abort": true, 00:18:01.574 "nvme_admin": true, 00:18:01.574 "nvme_io": true 00:18:01.574 }, 00:18:01.574 "memory_domains": [ 00:18:01.574 { 00:18:01.574 "dma_device_id": "system", 00:18:01.574 "dma_device_type": 1 00:18:01.574 } 00:18:01.574 ], 00:18:01.574 "driver_specific": { 00:18:01.574 "nvme": [ 00:18:01.574 { 00:18:01.574 "trid": { 00:18:01.574 "trtype": "TCP", 00:18:01.574 "adrfam": "IPv4", 00:18:01.574 "traddr": "10.0.0.2", 00:18:01.574 "trsvcid": "4420", 00:18:01.574 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:01.574 }, 00:18:01.574 "ctrlr_data": { 00:18:01.575 "cntlid": 1, 00:18:01.575 "vendor_id": "0x8086", 00:18:01.575 "model_number": "SPDK bdev Controller", 00:18:01.575 "serial_number": "SPDK0", 00:18:01.575 "firmware_revision": "24.05", 00:18:01.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:01.575 "oacs": { 00:18:01.575 "security": 0, 00:18:01.575 "format": 0, 00:18:01.575 "firmware": 0, 00:18:01.575 "ns_manage": 0 00:18:01.575 }, 00:18:01.575 "multi_ctrlr": true, 00:18:01.575 "ana_reporting": false 00:18:01.575 }, 00:18:01.575 "vs": { 00:18:01.575 "nvme_version": "1.3" 00:18:01.575 }, 00:18:01.575 "ns_data": { 00:18:01.575 "id": 1, 00:18:01.575 "can_share": true 00:18:01.575 } 00:18:01.575 } 00:18:01.575 ], 00:18:01.575 "mp_policy": "active_passive" 00:18:01.575 } 00:18:01.575 } 00:18:01.575 ] 00:18:01.575 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1296017 00:18:01.575 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:01.575 15:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.832 Running I/O for 10 seconds... 00:18:02.765 Latency(us) 00:18:02.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.765 Nvme0n1 : 1.00 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:18:02.765 =================================================================================================================== 00:18:02.765 Total : 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:18:02.765 00:18:03.697 15:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:03.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.697 Nvme0n1 : 2.00 14765.00 57.68 0.00 0.00 0.00 0.00 0.00 00:18:03.697 =================================================================================================================== 00:18:03.697 Total : 14765.00 57.68 0.00 0.00 0.00 0.00 0.00 00:18:03.697 00:18:03.955 true 00:18:03.955 15:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:03.955 15:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:04.213 15:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:04.213 15:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:04.213 15:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1296017 00:18:04.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.778 Nvme0n1 : 3.00 14818.33 57.88 0.00 0.00 0.00 0.00 0.00 00:18:04.778 =================================================================================================================== 00:18:04.778 Total : 14818.33 57.88 0.00 0.00 0.00 0.00 0.00 00:18:04.778 00:18:05.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.710 Nvme0n1 : 4.00 14923.75 58.30 0.00 0.00 0.00 0.00 0.00 00:18:05.710 =================================================================================================================== 00:18:05.710 Total : 14923.75 58.30 0.00 0.00 0.00 0.00 0.00 00:18:05.710 00:18:07.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.079 Nvme0n1 : 5.00 14910.80 58.25 0.00 0.00 0.00 0.00 0.00 00:18:07.079 =================================================================================================================== 00:18:07.079 Total : 14910.80 58.25 0.00 0.00 0.00 0.00 0.00 00:18:07.079 00:18:08.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.013 Nvme0n1 : 6.00 14860.33 58.05 0.00 0.00 0.00 0.00 0.00 00:18:08.013 =================================================================================================================== 00:18:08.013 Total : 14860.33 58.05 0.00 0.00 0.00 0.00 0.00 00:18:08.013 00:18:08.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.945 Nvme0n1 : 7.00 14915.00 58.26 0.00 0.00 0.00 0.00 0.00 00:18:08.945 
=================================================================================================================== 00:18:08.945 Total : 14915.00 58.26 0.00 0.00 0.00 0.00 0.00 00:18:08.945 00:18:09.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.878 Nvme0n1 : 8.00 14939.75 58.36 0.00 0.00 0.00 0.00 0.00 00:18:09.878 =================================================================================================================== 00:18:09.878 Total : 14939.75 58.36 0.00 0.00 0.00 0.00 0.00 00:18:09.878 00:18:10.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.811 Nvme0n1 : 9.00 15001.67 58.60 0.00 0.00 0.00 0.00 0.00 00:18:10.811 =================================================================================================================== 00:18:10.811 Total : 15001.67 58.60 0.00 0.00 0.00 0.00 0.00 00:18:10.811 00:18:11.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.744 Nvme0n1 : 10.00 15000.10 58.59 0.00 0.00 0.00 0.00 0.00 00:18:11.744 =================================================================================================================== 00:18:11.744 Total : 15000.10 58.59 0.00 0.00 0.00 0.00 0.00 00:18:11.744 00:18:11.744 00:18:11.744 Latency(us) 00:18:11.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.744 Nvme0n1 : 10.01 15000.10 58.59 0.00 0.00 8528.17 4975.88 20971.52 00:18:11.744 =================================================================================================================== 00:18:11.744 Total : 15000.10 58.59 0.00 0.00 8528.17 4975.88 20971.52 00:18:11.744 0 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1295885 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1295885 ']' 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1295885 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295885 00:18:11.744 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:11.745 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:11.745 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295885' 00:18:11.745 killing process with pid 1295885 00:18:11.745 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1295885 00:18:11.745 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.745 00:18:11.745 Latency(us) 00:18:11.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.745 =================================================================================================================== 00:18:11.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.745 15:36:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1295885 00:18:12.003 15:36:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:12.261 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1292781 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1292781 00:18:12.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1292781 Killed "${NVMF_APP[@]}" "$@" 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1297338 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1297338 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1297338 ']' 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:12.827 15:36:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:13.086 [2024-05-15 15:36:25.958568] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:18:13.086 [2024-05-15 15:36:25.958666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.086 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.086 [2024-05-15 15:36:26.003674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:13.086 [2024-05-15 15:36:26.034146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.086 [2024-05-15 15:36:26.117130] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.086 [2024-05-15 15:36:26.117194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.086 [2024-05-15 15:36:26.117228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.086 [2024-05-15 15:36:26.117242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.086 [2024-05-15 15:36:26.117263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.086 [2024-05-15 15:36:26.117289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.344 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:13.602 [2024-05-15 15:36:26.525928] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:13.602 [2024-05-15 15:36:26.526068] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:13.602 [2024-05-15 15:36:26.526118] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # 
bdev_timeout=2000 00:18:13.602 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:13.859 15:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52b77a50-5217-4cc7-9a03-d7dcfd95d58a -t 2000 00:18:14.116 [ 00:18:14.116 { 00:18:14.116 "name": "52b77a50-5217-4cc7-9a03-d7dcfd95d58a", 00:18:14.116 "aliases": [ 00:18:14.116 "lvs/lvol" 00:18:14.116 ], 00:18:14.116 "product_name": "Logical Volume", 00:18:14.116 "block_size": 4096, 00:18:14.116 "num_blocks": 38912, 00:18:14.116 "uuid": "52b77a50-5217-4cc7-9a03-d7dcfd95d58a", 00:18:14.116 "assigned_rate_limits": { 00:18:14.116 "rw_ios_per_sec": 0, 00:18:14.116 "rw_mbytes_per_sec": 0, 00:18:14.116 "r_mbytes_per_sec": 0, 00:18:14.116 "w_mbytes_per_sec": 0 00:18:14.116 }, 00:18:14.116 "claimed": false, 00:18:14.116 "zoned": false, 00:18:14.116 "supported_io_types": { 00:18:14.116 "read": true, 00:18:14.116 "write": true, 00:18:14.116 "unmap": true, 00:18:14.116 "write_zeroes": true, 00:18:14.116 "flush": false, 00:18:14.116 "reset": true, 00:18:14.116 "compare": false, 00:18:14.116 "compare_and_write": false, 00:18:14.116 "abort": false, 00:18:14.116 "nvme_admin": false, 00:18:14.117 "nvme_io": false 00:18:14.117 }, 00:18:14.117 "driver_specific": { 00:18:14.117 "lvol": { 00:18:14.117 "lvol_store_uuid": "c2fbb296-bcbf-48c7-ae73-9cb84e832350", 00:18:14.117 "base_bdev": "aio_bdev", 00:18:14.117 "thin_provision": false, 00:18:14.117 "num_allocated_clusters": 38, 00:18:14.117 "snapshot": false, 00:18:14.117 "clone": false, 00:18:14.117 "esnap_clone": false 00:18:14.117 } 00:18:14.117 } 00:18:14.117 } 00:18:14.117 ] 00:18:14.117 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:14.117 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:14.117 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:14.374 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:14.374 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:14.374 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:14.632 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:14.632 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:14.889 [2024-05-15 15:36:27.863456] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:14.889 15:36:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:14.889 15:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:15.146 request: 00:18:15.146 { 00:18:15.146 "uuid": "c2fbb296-bcbf-48c7-ae73-9cb84e832350", 00:18:15.146 "method": "bdev_lvol_get_lvstores", 00:18:15.146 "req_id": 1 00:18:15.146 } 00:18:15.146 Got JSON-RPC error response 00:18:15.146 response: 00:18:15.146 { 00:18:15.146 "code": -19, 00:18:15.146 "message": "No such device" 00:18:15.146 } 00:18:15.146 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:15.146 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.146 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.146 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.146 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:15.404 aio_bdev 00:18:15.404 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:18:15.404 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:18:15.404 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:15.404 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:15.404 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:15.404 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:15.404 15:36:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:15.663 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52b77a50-5217-4cc7-9a03-d7dcfd95d58a -t 2000 00:18:15.920 [ 00:18:15.920 { 00:18:15.920 "name": "52b77a50-5217-4cc7-9a03-d7dcfd95d58a", 00:18:15.920 "aliases": [ 00:18:15.920 "lvs/lvol" 00:18:15.920 ], 00:18:15.920 "product_name": "Logical Volume", 00:18:15.920 "block_size": 4096, 00:18:15.920 "num_blocks": 38912, 00:18:15.920 "uuid": "52b77a50-5217-4cc7-9a03-d7dcfd95d58a", 00:18:15.920 "assigned_rate_limits": { 00:18:15.920 "rw_ios_per_sec": 0, 00:18:15.920 "rw_mbytes_per_sec": 0, 00:18:15.920 "r_mbytes_per_sec": 0, 00:18:15.920 "w_mbytes_per_sec": 0 00:18:15.920 }, 00:18:15.920 "claimed": false, 00:18:15.920 "zoned": false, 00:18:15.920 "supported_io_types": { 00:18:15.920 "read": true, 00:18:15.920 "write": true, 00:18:15.920 "unmap": true, 00:18:15.920 "write_zeroes": true, 00:18:15.920 "flush": false, 00:18:15.920 "reset": true, 00:18:15.920 "compare": false, 00:18:15.920 "compare_and_write": false, 00:18:15.920 "abort": false, 00:18:15.920 "nvme_admin": false, 00:18:15.920 "nvme_io": false 00:18:15.920 }, 00:18:15.920 "driver_specific": { 00:18:15.920 "lvol": { 00:18:15.920 "lvol_store_uuid": "c2fbb296-bcbf-48c7-ae73-9cb84e832350", 00:18:15.920 "base_bdev": "aio_bdev", 00:18:15.920 "thin_provision": false, 00:18:15.920 "num_allocated_clusters": 38, 00:18:15.920 "snapshot": false, 00:18:15.920 "clone": false, 00:18:15.920 "esnap_clone": false 00:18:15.920 } 00:18:15.920 } 00:18:15.920 } 00:18:15.920 ] 00:18:15.920 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:15.920 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:15.920 15:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:16.177 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:16.177 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:16.177 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:16.433 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:16.433 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52b77a50-5217-4cc7-9a03-d7dcfd95d58a 00:18:16.691 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2fbb296-bcbf-48c7-ae73-9cb84e832350 00:18:16.976 15:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:17.234 00:18:17.234 real 0m18.937s 00:18:17.234 user 0m47.908s 00:18:17.234 sys 0m4.595s 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:17.234 ************************************ 00:18:17.234 END TEST lvs_grow_dirty 00:18:17.234 ************************************ 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:17.234 nvmf_trace.0 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.234 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.234 rmmod nvme_tcp 00:18:17.234 rmmod nvme_fabrics 00:18:17.234 rmmod nvme_keyring 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1297338 ']' 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1297338 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1297338 ']' 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1297338 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297338 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:17.491 
15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297338' 00:18:17.491 killing process with pid 1297338 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1297338 00:18:17.491 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1297338 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.749 15:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.649 15:36:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.649 00:18:19.649 real 0m42.126s 00:18:19.649 user 1m10.719s 00:18:19.649 sys 0m8.684s 00:18:19.649 15:36:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:19.649 15:36:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:19.649 ************************************ 00:18:19.649 END TEST nvmf_lvs_grow 00:18:19.649 ************************************ 00:18:19.649 15:36:32 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:19.649 15:36:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:19.649 15:36:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:19.649 15:36:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.649 ************************************ 00:18:19.649 START TEST nvmf_bdev_io_wait 00:18:19.649 ************************************ 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:19.649 * Looking for test storage... 
00:18:19.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.649 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.907 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.908 15:36:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.438 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.438 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.438 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.438 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.438 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:22.439 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:22.439 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:22.439 Found net devices under 0000:09:00.0: cvl_0_0 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:22.439 Found net devices under 0000:09:00.1: cvl_0_1 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:18:22.439 00:18:22.439 --- 10.0.0.2 ping statistics --- 00:18:22.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.439 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:18:22.439 00:18:22.439 --- 10.0.0.1 ping statistics --- 00:18:22.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.439 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1300161 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1300161 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1300161 ']' 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.439 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.439 [2024-05-15 15:36:35.421764] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:18:22.440 [2024-05-15 15:36:35.421860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.440 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.440 [2024-05-15 15:36:35.466064] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:22.440 [2024-05-15 15:36:35.496929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.697 [2024-05-15 15:36:35.588345] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.697 [2024-05-15 15:36:35.588397] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.697 [2024-05-15 15:36:35.588426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.697 [2024-05-15 15:36:35.588437] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.697 [2024-05-15 15:36:35.588448] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.697 [2024-05-15 15:36:35.588611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.697 [2024-05-15 15:36:35.588661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.697 [2024-05-15 15:36:35.588689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.697 [2024-05-15 15:36:35.588691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 [2024-05-15 15:36:35.741651] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 Malloc0 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.956 [2024-05-15 15:36:35.802605] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:22.956 [2024-05-15 15:36:35.802910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1300187 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1300188 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1300191 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # 
for subsystem in "${@:-1}" 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.956 { 00:18:22.956 "params": { 00:18:22.956 "name": "Nvme$subsystem", 00:18:22.956 "trtype": "$TEST_TRANSPORT", 00:18:22.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.956 "adrfam": "ipv4", 00:18:22.956 "trsvcid": "$NVMF_PORT", 00:18:22.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.956 "hdgst": ${hdgst:-false}, 00:18:22.956 "ddgst": ${ddgst:-false} 00:18:22.956 }, 00:18:22.956 "method": "bdev_nvme_attach_controller" 00:18:22.956 } 00:18:22.956 EOF 00:18:22.956 )") 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1300193 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.956 { 00:18:22.956 "params": { 00:18:22.956 "name": "Nvme$subsystem", 00:18:22.956 "trtype": "$TEST_TRANSPORT", 00:18:22.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.956 "adrfam": "ipv4", 00:18:22.956 "trsvcid": "$NVMF_PORT", 00:18:22.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.956 "hdgst": ${hdgst:-false}, 00:18:22.956 "ddgst": ${ddgst:-false} 00:18:22.956 }, 00:18:22.956 "method": "bdev_nvme_attach_controller" 00:18:22.956 } 00:18:22.956 EOF 00:18:22.956 )") 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.956 { 00:18:22.956 "params": { 00:18:22.956 "name": "Nvme$subsystem", 00:18:22.956 "trtype": "$TEST_TRANSPORT", 00:18:22.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.956 "adrfam": "ipv4", 00:18:22.956 "trsvcid": "$NVMF_PORT", 00:18:22.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.956 "hdgst": ${hdgst:-false}, 00:18:22.956 "ddgst": ${ddgst:-false} 00:18:22.956 }, 00:18:22.956 "method": "bdev_nvme_attach_controller" 00:18:22.956 } 00:18:22.956 EOF 00:18:22.956 )") 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # config=() 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.956 { 00:18:22.956 "params": { 00:18:22.956 "name": "Nvme$subsystem", 00:18:22.956 "trtype": "$TEST_TRANSPORT", 00:18:22.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.956 "adrfam": "ipv4", 00:18:22.956 "trsvcid": "$NVMF_PORT", 00:18:22.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.956 "hdgst": ${hdgst:-false}, 00:18:22.956 "ddgst": ${ddgst:-false} 00:18:22.956 }, 00:18:22.956 "method": "bdev_nvme_attach_controller" 00:18:22.956 } 00:18:22.956 EOF 00:18:22.956 )") 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1300187 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.956 "params": { 00:18:22.956 "name": "Nvme1", 00:18:22.956 "trtype": "tcp", 00:18:22.956 "traddr": "10.0.0.2", 00:18:22.956 "adrfam": "ipv4", 00:18:22.956 "trsvcid": "4420", 00:18:22.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.956 "hdgst": false, 00:18:22.956 "ddgst": false 00:18:22.956 }, 00:18:22.956 "method": "bdev_nvme_attach_controller" 00:18:22.956 }' 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
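(Strung together, the rpc_cmd calls traced above amount to the following target configuration; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the comments are interpretation, not log output. The tiny bdev_set_options pool is presumably what makes this a bdev_io_wait test: once the 5-entry spdk_bdev_io pool is exhausted, submissions fail with ENOMEM and have to requeue through the io_wait path.

  ./scripts/rpc.py bdev_set_options -p 5 -c 1      # 5-entry bdev_io pool, 1-entry per-thread cache
  ./scripts/rpc.py framework_start_init            # resume the app started with --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)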
00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:22.956 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.956 "params": { 00:18:22.956 "name": "Nvme1", 00:18:22.956 "trtype": "tcp", 00:18:22.956 "traddr": "10.0.0.2", 00:18:22.956 "adrfam": "ipv4", 00:18:22.956 "trsvcid": "4420", 00:18:22.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.957 "hdgst": false, 00:18:22.957 "ddgst": false 00:18:22.957 }, 00:18:22.957 "method": "bdev_nvme_attach_controller" 00:18:22.957 }' 00:18:22.957 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:22.957 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.957 "params": { 00:18:22.957 "name": "Nvme1", 00:18:22.957 "trtype": "tcp", 00:18:22.957 "traddr": "10.0.0.2", 00:18:22.957 "adrfam": "ipv4", 00:18:22.957 "trsvcid": "4420", 00:18:22.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.957 "hdgst": false, 00:18:22.957 "ddgst": false 00:18:22.957 }, 00:18:22.957 "method": "bdev_nvme_attach_controller" 00:18:22.957 }' 00:18:22.957 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:22.957 15:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.957 "params": { 00:18:22.957 "name": "Nvme1", 00:18:22.957 "trtype": "tcp", 00:18:22.957 "traddr": "10.0.0.2", 00:18:22.957 "adrfam": "ipv4", 00:18:22.957 "trsvcid": "4420", 00:18:22.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.957 "hdgst": false, 00:18:22.957 "ddgst": false 00:18:22.957 }, 00:18:22.957 "method": "bdev_nvme_attach_controller" 00:18:22.957 }' 00:18:22.957 [2024-05-15 15:36:35.850706] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:22.957 [2024-05-15 15:36:35.850703] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:22.957 [2024-05-15 15:36:35.850703] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:22.957 [2024-05-15 15:36:35.850707] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:22.957 [2024-05-15 15:36:35.850796] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 15:36:35.850796] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 15:36:35.850797] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 15:36:35.850798] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:22.957 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:22.957 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:22.957 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:22.957 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.957 [2024-05-15 15:36:36.008018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:18:22.957 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.957 [2024-05-15 15:36:36.041622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.214 [2024-05-15 15:36:36.110872] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:23.214 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.214 [2024-05-15 15:36:36.118420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:23.214 [2024-05-15 15:36:36.145390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.214 [2024-05-15 15:36:36.211960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:23.214 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.214 [2024-05-15 15:36:36.221878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:23.214 [2024-05-15 15:36:36.247186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.214 [2024-05-15 15:36:36.287889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:23.472 [2024-05-15 15:36:36.323083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.472 [2024-05-15 15:36:36.328315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:23.472 [2024-05-15 15:36:36.390471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:23.472 Running I/O for 1 seconds... 00:18:23.729 Running I/O for 1 seconds... 00:18:23.729 Running I/O for 1 seconds... 00:18:23.729 Running I/O for 1 seconds... 
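(Each of the four bdevperf jobs gets its bdev layer described by the gen_nvmf_target_json fragment printed above. Written out as a standalone file, the write job's setup looks roughly like this; the outer subsystems/bdev wrapper follows SPDK's usual JSON config layout and is assumed here, as is the nvme.json file name. The read, flush and unmap jobs differ only in core mask (-m), instance id (-i) and workload (-w).

cat > nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 128 outstanding 4 KiB writes for 1 second, 256 MiB of hugepage memory, reactor on core 4 (mask 0x10)
./build/examples/bdevperf -m 0x10 -i 1 --json nvme.json -q 128 -o 4096 -w write -t 1 -s 256
)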
00:18:24.662 00:18:24.662 Latency(us) 00:18:24.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.662 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:24.662 Nvme1n1 : 1.01 13186.22 51.51 0.00 0.00 9674.50 5461.33 20097.71 00:18:24.662 =================================================================================================================== 00:18:24.662 Total : 13186.22 51.51 0.00 0.00 9674.50 5461.33 20097.71 00:18:24.662 00:18:24.662 Latency(us) 00:18:24.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.662 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:24.662 Nvme1n1 : 1.02 5130.85 20.04 0.00 0.00 24638.29 9514.86 35340.89 00:18:24.662 =================================================================================================================== 00:18:24.662 Total : 5130.85 20.04 0.00 0.00 24638.29 9514.86 35340.89 00:18:24.662 00:18:24.662 Latency(us) 00:18:24.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.662 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:24.662 Nvme1n1 : 1.00 188159.09 735.00 0.00 0.00 677.57 263.96 922.36 00:18:24.662 =================================================================================================================== 00:18:24.662 Total : 188159.09 735.00 0.00 0.00 677.57 263.96 922.36 00:18:24.662 00:18:24.662 Latency(us) 00:18:24.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.662 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:24.662 Nvme1n1 : 1.01 5332.96 20.83 0.00 0.00 23887.95 8786.68 53982.25 00:18:24.662 =================================================================================================================== 00:18:24.662 Total : 5332.96 20.83 0.00 0.00 23887.95 8786.68 53982.25 00:18:24.920 15:36:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1300188 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1300191 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1300193 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.177 rmmod nvme_tcp 00:18:25.177 rmmod nvme_fabrics 00:18:25.177 rmmod nvme_keyring 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1300161 ']' 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1300161 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1300161 ']' 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1300161 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1300161 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1300161' 00:18:25.177 killing process with pid 1300161 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1300161 00:18:25.177 [2024-05-15 15:36:38.119404] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:25.177 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1300161 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.436 15:36:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.337 15:36:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:27.337 00:18:27.337 real 0m7.687s 00:18:27.337 user 0m17.295s 00:18:27.337 sys 0m3.830s 00:18:27.337 15:36:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:27.337 15:36:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:27.337 ************************************ 00:18:27.337 END TEST nvmf_bdev_io_wait 00:18:27.337 ************************************ 00:18:27.337 15:36:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:27.337 15:36:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:27.337 15:36:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:27.337 15:36:40 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:18:27.337 ************************************ 00:18:27.337 START TEST nvmf_queue_depth 00:18:27.337 ************************************ 00:18:27.337 15:36:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:27.595 * Looking for test storage... 00:18:27.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.595 15:36:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:30.125 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.126 
15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:30.126 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:30.126 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:30.126 Found net devices under 0000:09:00.0: cvl_0_0 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:30.126 Found net devices under 0000:09:00.1: cvl_0_1 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.126 15:36:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:30.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:30.126 00:18:30.126 --- 10.0.0.2 ping statistics --- 00:18:30.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.126 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:18:30.126 00:18:30.126 --- 10.0.0.1 ping statistics --- 00:18:30.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.126 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1302813 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1302813 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1302813 ']' 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:30.126 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 [2024-05-15 15:36:43.093749] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
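(nvmfappstart for the queue_depth run repeats the same pattern on a single core: launch nvmf_tgt in the namespace, remember its pid, and block in waitforlisten until the RPC socket answers. A hedged sketch; the rpc_get_methods polling is an assumption about what waitforlisten does internally, and the pid variable name follows the script.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # -m 0x2: one reactor, core 1
  nvmfpid=$!
  # wait until the target answers on its RPC socket before configuring it
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done
)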
00:18:30.126 [2024-05-15 15:36:43.093847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.126 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.126 [2024-05-15 15:36:43.138676] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:30.126 [2024-05-15 15:36:43.170074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.384 [2024-05-15 15:36:43.256360] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.384 [2024-05-15 15:36:43.256417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.384 [2024-05-15 15:36:43.256445] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.384 [2024-05-15 15:36:43.256457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.384 [2024-05-15 15:36:43.256467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.384 [2024-05-15 15:36:43.256510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.384 [2024-05-15 15:36:43.392925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.384 Malloc0 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.384 [2024-05-15 15:36:43.453595] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.384 [2024-05-15 15:36:43.453895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1302843 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1302843 /var/tmp/bdevperf.sock 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1302843 ']' 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:30.384 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.642 [2024-05-15 15:36:43.499006] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:30.642 [2024-05-15 15:36:43.499082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302843 ] 00:18:30.642 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.642 [2024-05-15 15:36:43.536648] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:30.642 [2024-05-15 15:36:43.572002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.642 [2024-05-15 15:36:43.664301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.899 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.899 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:30.899 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.899 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.899 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.900 NVMe0n1 00:18:30.900 15:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.900 15:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.900 Running I/O for 10 seconds... 00:18:43.096 00:18:43.096 Latency(us) 00:18:43.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.096 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:43.096 Verification LBA range: start 0x0 length 0x4000 00:18:43.096 NVMe0n1 : 10.06 8540.50 33.36 0.00 0.00 119413.29 12815.93 79225.74 00:18:43.096 =================================================================================================================== 00:18:43.096 Total : 8540.50 33.36 0.00 0.00 119413.29 12815.93 79225.74 00:18:43.096 0 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1302843 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1302843 ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1302843 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1302843 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1302843' 00:18:43.096 killing process with pid 1302843 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1302843 00:18:43.096 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.096 00:18:43.096 Latency(us) 00:18:43.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.096 =================================================================================================================== 00:18:43.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1302843 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
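[annotation] For reference, the target setup and load that the queue_depth test exercised above can be reproduced by hand with SPDK's rpc.py. This is a minimal sketch using the NQN, address, and bdevperf options visible in the log; the relative paths and the use of the default /var/tmp/spdk.sock RPC socket are assumptions rather than what the harness's rpc_cmd wrapper does verbatim.

# start the target in the test namespace with the same core mask as above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0x2 &

# configure it over the default RPC socket (assumed: /var/tmp/spdk.sock)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# attach bdevperf as a TCP initiator and run 10 s of 4 KiB verify I/O at queue depth 1024
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

[annotation] The result reported above is consistent with the requested depth: 8540.50 IOPS at an average latency of ~119.4 ms gives roughly 1020 outstanding I/Os by Little's law, i.e. essentially the configured queue depth of 1024.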
00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.096 rmmod nvme_tcp 00:18:43.096 rmmod nvme_fabrics 00:18:43.096 rmmod nvme_keyring 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1302813 ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1302813 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1302813 ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1302813 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1302813 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1302813' 00:18:43.096 killing process with pid 1302813 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1302813 00:18:43.096 [2024-05-15 15:36:54.424076] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1302813 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.096 15:36:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.691 15:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:43.691 00:18:43.691 real 0m16.303s 00:18:43.691 user 0m22.509s 00:18:43.691 sys 0m3.273s 00:18:43.691 15:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 
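[annotation] The nvmftestfini teardown traced above amounts to unloading the initiator-side kernel modules, stopping the target, and removing the test namespace plumbing. A rough hand-run equivalent is sketched below; the kill and namespace-deletion lines are assumptions standing in for the harness's killprocess and remove_spdk_ns helpers, whose internals are not shown in this log.

# unload host-side NVMe/TCP modules (this is what produces the rmmod nvme_tcp /
# nvme_fabrics / nvme_keyring lines seen above)
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the target and tear down the test namespace / leftover addresses
kill "$nvmfpid"                      # killprocess in the harness also waits for exit
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1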
00:18:43.691 15:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.691 ************************************ 00:18:43.691 END TEST nvmf_queue_depth 00:18:43.691 ************************************ 00:18:43.691 15:36:56 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:43.691 15:36:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:43.691 15:36:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.691 15:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.948 ************************************ 00:18:43.948 START TEST nvmf_target_multipath 00:18:43.948 ************************************ 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:43.948 * Looking for test storage... 00:18:43.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.948 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
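[annotation] The nvmf/common.sh variables being set here (NVME_CONNECT, NVME_HOSTNQN from nvme gen-hostnqn, NVMF_PORT, NVME_SUBNQN, and so on) are what the kernel-initiator tests would later combine into an nvme connect call. A minimal hand-run equivalent, assuming the 10.0.0.2:4420 listener and the cnode1 subsystem NQN used elsewhere in this job, would look roughly like:

HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
nvme list                            # the exported namespace appears as a local nvme block device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1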
00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.949 15:36:56 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.949 15:36:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.478 15:36:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:46.478 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:46.478 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.478 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:46.479 Found net devices under 0000:09:00.0: cvl_0_0 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:46.479 Found net devices under 0000:09:00.1: cvl_0_1 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.479 15:36:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:18:46.479 00:18:46.479 --- 10.0.0.2 ping statistics --- 00:18:46.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.479 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:18:46.479 00:18:46.479 --- 10.0.0.1 ping statistics --- 00:18:46.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.479 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:46.479 only one NIC for nvmf test 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.479 rmmod nvme_tcp 00:18:46.479 rmmod nvme_fabrics 00:18:46.479 rmmod nvme_keyring 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.479 15:36:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.026 00:18:49.026 real 0m4.736s 00:18:49.026 user 0m0.914s 00:18:49.026 sys 0m1.819s 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:49.026 15:37:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.026 ************************************ 00:18:49.026 END TEST nvmf_target_multipath 00:18:49.026 ************************************ 00:18:49.026 15:37:01 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.026 15:37:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:49.026 15:37:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:49.026 15:37:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.026 ************************************ 00:18:49.026 START TEST nvmf_zcopy 00:18:49.026 ************************************ 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.026 * Looking for test storage... 
00:18:49.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:49.026 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.027 15:37:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:51.551 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.551 
15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:51.551 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:51.551 Found net devices under 0000:09:00.0: cvl_0_0 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:51.551 Found net devices under 0000:09:00.1: cvl_0_1 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:18:51.551 00:18:51.551 --- 10.0.0.2 ping statistics --- 00:18:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.551 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:18:51.551 00:18:51.551 --- 10.0.0.1 ping statistics --- 00:18:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.551 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1308597 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1308597 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1308597 ']' 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.551 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.551 [2024-05-15 15:37:04.391540] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:51.551 [2024-05-15 15:37:04.391618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.551 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.551 [2024-05-15 15:37:04.435503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:51.551 [2024-05-15 15:37:04.472969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.551 [2024-05-15 15:37:04.565320] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:51.551 [2024-05-15 15:37:04.565373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.551 [2024-05-15 15:37:04.565390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.551 [2024-05-15 15:37:04.565404] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.551 [2024-05-15 15:37:04.565417] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.551 [2024-05-15 15:37:04.565454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 [2024-05-15 15:37:04.720254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 [2024-05-15 15:37:04.736212] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:51.809 [2024-05-15 15:37:04.736529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 malloc0 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:51.809 { 00:18:51.809 "params": { 00:18:51.809 "name": "Nvme$subsystem", 00:18:51.809 "trtype": "$TEST_TRANSPORT", 00:18:51.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.809 "adrfam": "ipv4", 00:18:51.809 "trsvcid": "$NVMF_PORT", 00:18:51.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.809 "hdgst": ${hdgst:-false}, 00:18:51.809 "ddgst": ${ddgst:-false} 00:18:51.809 }, 00:18:51.809 "method": "bdev_nvme_attach_controller" 00:18:51.809 } 00:18:51.809 EOF 00:18:51.809 )") 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:51.809 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:51.810 15:37:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:51.810 "params": { 00:18:51.810 "name": "Nvme1", 00:18:51.810 "trtype": "tcp", 00:18:51.810 "traddr": "10.0.0.2", 00:18:51.810 "adrfam": "ipv4", 00:18:51.810 "trsvcid": "4420", 00:18:51.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.810 "hdgst": false, 00:18:51.810 "ddgst": false 00:18:51.810 }, 00:18:51.810 "method": "bdev_nvme_attach_controller" 00:18:51.810 }' 00:18:51.810 [2024-05-15 15:37:04.817584] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:51.810 [2024-05-15 15:37:04.817666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308739 ] 00:18:51.810 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.810 [2024-05-15 15:37:04.853440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
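The target configuration traced above (TCP transport with zero-copy, subsystem nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.2:4420, and a 32 MB / 4096-byte-block malloc bdev attached as namespace 1) is issued through the test framework's rpc_cmd wrapper. A minimal standalone sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock:

  # Flags copied from the trace above; the rpc.py path and socket are assumptions.
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1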
00:18:51.810 [2024-05-15 15:37:04.891353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:52.067 [2024-05-15 15:37:04.988058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:52.325 Running I/O for 10 seconds...
00:19:02.288
00:19:02.288 Latency(us)
00:19:02.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:02.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:02.288 Verification LBA range: start 0x0 length 0x1000
00:19:02.288 Nvme1n1 : 10.02 5701.64 44.54 0.00 0.00 22387.43 3276.80 31068.92
00:19:02.288 ===================================================================================================================
00:19:02.288 Total : 5701.64 44.54 0.00 0.00 22387.43 3276.80 31068.92
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1309928
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:02.547 {
00:19:02.547 "params": {
00:19:02.547 "name": "Nvme$subsystem",
00:19:02.547 "trtype": "$TEST_TRANSPORT",
00:19:02.547 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:02.547 "adrfam": "ipv4",
00:19:02.547 "trsvcid": "$NVMF_PORT",
00:19:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:02.547 "hdgst": ${hdgst:-false},
00:19:02.547 "ddgst": ${ddgst:-false}
00:19:02.547 },
00:19:02.547 "method": "bdev_nvme_attach_controller"
00:19:02.547 }
00:19:02.547 EOF
00:19:02.547 )")
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
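Both bdevperf runs above take their bdev configuration over an anonymous /dev/fd pipe: gen_nvmf_target_json (the helper traced from nvmf/common.sh) emits the bdev_nvme_attach_controller stanza shown, and bdevperf reads it via --json. A sketch of the equivalent call shape, assuming nvmf/common.sh is sourced so the helper is available; bash process substitution is what yields the /dev/fd/63 path seen in the log:

  # -t 5 run time in seconds, -q 128 queue depth, -w randrw -M 50 for a 50/50
  # read/write mix, -o 8192 for 8 KiB I/Os -- matching the logged invocation.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192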
00:19:02.547 [2024-05-15 15:37:15.478230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.547 [2024-05-15 15:37:15.478290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:02.547 15:37:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:02.547 "params": { 00:19:02.547 "name": "Nvme1", 00:19:02.547 "trtype": "tcp", 00:19:02.547 "traddr": "10.0.0.2", 00:19:02.547 "adrfam": "ipv4", 00:19:02.547 "trsvcid": "4420", 00:19:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.547 "hdgst": false, 00:19:02.547 "ddgst": false 00:19:02.547 }, 00:19:02.547 "method": "bdev_nvme_attach_controller" 00:19:02.547 }' 00:19:02.547 [2024-05-15 15:37:15.486160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.486188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.494172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.494196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.502185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.502226] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.510204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.510248] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.513574] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:19:02.548 [2024-05-15 15:37:15.513649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309928 ] 00:19:02.548 [2024-05-15 15:37:15.518254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.518290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.526280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.526302] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.534298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.534319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.542318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.542354] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.548 [2024-05-15 15:37:15.550324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.550345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.551424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:02.548 [2024-05-15 15:37:15.558347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.558368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.566371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.566392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.574391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.574412] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.582413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.582434] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.585439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.548 [2024-05-15 15:37:15.590448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.590472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.598511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.598553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.606483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.606523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
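The paired messages that repeat through this part of the log are the target rejecting nvmf_subsystem_add_ns requests for NSID 1 while that namespace is still attached: each rejected attempt logs the subsystem.c line followed by the nvmf_rpc.c line, and the zcopy test exercises this path repeatedly. A single instance can be provoked against the target configured above (sketch only; default RPC socket assumed):

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # target log: subsystem.c: ... Requested NSID 1 already in use
  #             nvmf_rpc.c:  ... Unable to add namespace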
00:19:02.548 [2024-05-15 15:37:15.614521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.614545] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.622547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.622572] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.630575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.630610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.638622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.638661] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.548 [2024-05-15 15:37:15.646617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.548 [2024-05-15 15:37:15.646642] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.654639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.654665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.662661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.662686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.670680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.670705] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.675131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.807 [2024-05-15 15:37:15.678704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.678729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.686728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.686752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.694782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.694819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.702803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.702844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.710828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.710868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.718850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.718889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.726875] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.726916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.734897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.734936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.742882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.742908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.750939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.750979] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.758968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.759007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.766959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.766985] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.774969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.774995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.783008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.783039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.791025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.791055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.799048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.799076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.807072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.807101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.815091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.815117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.823113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.823138] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.831135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.831160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.839160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.839185] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.847188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.847222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.855211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.855250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.863238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.863279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.871256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.871292] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.879293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.879315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.887310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.887346] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.895332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.895354] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.807 [2024-05-15 15:37:15.903357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.807 [2024-05-15 15:37:15.903382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.066 [2024-05-15 15:37:15.911367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.066 [2024-05-15 15:37:15.911391] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.066 [2024-05-15 15:37:15.919384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.066 [2024-05-15 15:37:15.919406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.066 [2024-05-15 15:37:15.927404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.066 [2024-05-15 15:37:15.927427] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.066 [2024-05-15 15:37:15.935427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.066 [2024-05-15 15:37:15.935449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.066 [2024-05-15 15:37:15.943455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.066 [2024-05-15 15:37:15.943480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.066 [2024-05-15 15:37:15.951472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.951512] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:15.959510] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.959536] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:15.967552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.967578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:15.975566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.975591] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:15.983594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.983619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:15.991616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.991643] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:15.999640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:15.999665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.007667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.007697] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 Running I/O for 5 seconds... 00:19:03.067 [2024-05-15 15:37:16.015690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.015717] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.030075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.030109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.042057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.042090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.054169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.054201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.066170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.066201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.077694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.077725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.089134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.089165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.100845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 
[2024-05-15 15:37:16.100876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.112159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.112196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.124127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.124157] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.136327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.136356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.148004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.148035] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.067 [2024-05-15 15:37:16.159833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.067 [2024-05-15 15:37:16.159865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.172984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.173015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.183626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.183656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.195310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.195338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.206656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.206687] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.217853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.217883] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.229488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.229533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.240605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.240636] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.252397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.252424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.263771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.263801] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.275598] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.275629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.287555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.287585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.299450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.299478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.310987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.311017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.323025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.323055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.335044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.335083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.346665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.346695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.358283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.358311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.369944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.369975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.381607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.381638] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.393426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.393454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.405167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.405197] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.325 [2024-05-15 15:37:16.416619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.325 [2024-05-15 15:37:16.416650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.430340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.430367] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.441423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.441452] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.453453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.453480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.465323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.465351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.477044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.477075] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.488900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.488932] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.500582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.500612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.512044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.512074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.523840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.523870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.535657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.535688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.549456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.549484] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.560422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.560458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.571778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.571808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.583041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.583071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.594456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.594485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.605801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.605832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.617277] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.617305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.628985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.629015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.640731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.640761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.652656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.652686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.664148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.664179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.583 [2024-05-15 15:37:16.676341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.583 [2024-05-15 15:37:16.676368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.688223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.688268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.699801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.699832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.713001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.713031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.723537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.723567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.735506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.735550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.746987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.747017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.758393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.758421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.770146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.770177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.782003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.782048] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.793446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.793475] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.804722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.804753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.816455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.816483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.827853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.827884] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.839042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.839072] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.850503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.850530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.861855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.841 [2024-05-15 15:37:16.861886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.841 [2024-05-15 15:37:16.873455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.873482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.842 [2024-05-15 15:37:16.884548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.884582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.842 [2024-05-15 15:37:16.895846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.895876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.842 [2024-05-15 15:37:16.907302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.907329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.842 [2024-05-15 15:37:16.918661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.918691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.842 [2024-05-15 15:37:16.929980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.930011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.842 [2024-05-15 15:37:16.941404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.842 [2024-05-15 15:37:16.941432] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:16.953110] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:16.953140] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:16.964792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:16.964822] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:16.976392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:16.976419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:16.987934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:16.987965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:16.999308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:16.999335] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.012807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.012838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.023300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.023328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.034575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.034606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.046169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.046210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.057995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.058027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.070026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.070058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.081969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.082001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.094073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.094104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.105950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.105981] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.100 [2024-05-15 15:37:17.117197] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.100 [2024-05-15 15:37:17.117242] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:04.100 [2024-05-15 15:37:17.128860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:04.100 [2024-05-15 15:37:17.128891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two errors repeat continuously, roughly every 11-12 ms, through the pair below ...]
00:19:07.757 [2024-05-15 15:37:20.643898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:07.757 [2024-05-15 15:37:20.643925] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:07.757 [2024-05-15 15:37:20.654242] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.757 [2024-05-15 15:37:20.654270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.757 [2024-05-15 15:37:20.665090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.757 [2024-05-15 15:37:20.665117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.757 [2024-05-15 15:37:20.675764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.675791] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.686469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.686496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.697702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.697733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.709336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.709365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.721282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.721310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.732987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.733018] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.744800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.744831] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.756398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.756426] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.768022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.768052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.779392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.779419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.791358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.791385] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.803075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.803107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.815275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.815303] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 [2024-05-15 15:37:20.826961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-05-15 15:37:20.826992] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.839174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.839205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.851089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.851119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.862854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.862885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.874377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.874404] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.887176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.887206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.898181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.898211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.909990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.910021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.921574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.921605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.935277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.935304] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.945740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.945770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.957597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.957627] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.968943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.968973] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.015 [2024-05-15 15:37:20.980687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.015 [2024-05-15 15:37:20.980718] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:20.992148] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:20.992178] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.004002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.004032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.015964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.015995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.027392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.027420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.038304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.038332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 00:19:08.016 Latency(us) 00:19:08.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.016 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:08.016 Nvme1n1 : 5.01 11075.35 86.53 0.00 0.00 11540.24 4951.61 22330.79 00:19:08.016 =================================================================================================================== 00:19:08.016 Total : 11075.35 86.53 0.00 0.00 11540.24 4951.61 22330.79 00:19:08.016 [2024-05-15 15:37:21.042924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.042952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.051032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.051068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.058981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.059016] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.067046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.067101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.075054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.075102] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.083068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.083117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.091090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.091156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.099120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.099167] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.107149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.107199] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.016 [2024-05-15 15:37:21.115162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.016 [2024-05-15 15:37:21.115209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.123194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.123267] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.131227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.131291] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.139266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.139320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.147298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.147350] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.155304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.155355] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.163330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.163382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.171352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.171403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.179358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.179409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.187349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.187381] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.195349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.195373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.203419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.203465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.211437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.211482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.219476] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.219524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.227441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.227465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.235471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.235501] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.243546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.243607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.251553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.251599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.259563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.259592] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.267574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.267599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 [2024-05-15 15:37:21.275590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.273 [2024-05-15 15:37:21.275615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1309928) - No such process 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1309928 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:08.273 delay0 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.273 15:37:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:08.273 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.531 [2024-05-15 15:37:21.395363] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:16.651 [2024-05-15 15:37:28.474998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75fee0 is same with the state(5) to be set 00:19:16.651 Initializing NVMe Controllers 00:19:16.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:16.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:16.652 Initialization complete. Launching workers. 00:19:16.652 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 13466 00:19:16.652 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13634, failed to submit 96 00:19:16.652 success 13529, unsuccess 105, failed 0 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.652 rmmod nvme_tcp 00:19:16.652 rmmod nvme_fabrics 00:19:16.652 rmmod nvme_keyring 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1308597 ']' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1308597 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1308597 ']' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1308597 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1308597 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1308597' 00:19:16.652 killing process with pid 1308597 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1308597 00:19:16.652 [2024-05-15 15:37:28.570481] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1308597 
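The zcopy run traced above reduces to four steps: drop the original namespace, wrap the malloc bdev in a delay bdev, re-expose it as NSID 1, and drive it with the abort example over NVMe/TCP. A minimal standalone sketch of that sequence, assuming a running nvmf_tgt with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, an existing bdev named malloc0, and scripts/rpc.py from an SPDK checkout (rpc_cmd in the trace wraps the same RPCs):

  #!/usr/bin/env bash
  # Sketch only: mirrors the RPC sequence captured in the trace above.
  # Assumes nvmf_tgt is running, nqn.2016-06.io.spdk:cnode1 exists with a
  # listener on 10.0.0.2:4420, and a bdev named malloc0 has been created.
  set -euo pipefail
  RPC=scripts/rpc.py

  # Replace the plain malloc namespace with a delay bdev so commands stay
  # in flight long enough for abort requests to find something to cancel.
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # 5 seconds of randrw at queue depth 64 against the slow namespace, with
  # aborts submitted against outstanding commands (summary shown above).
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The long run of 'Requested NSID 1 already in use' errors earlier in the trace comes from the test repeatedly calling nvmf_subsystem_add_ns for an NSID that is still occupied, so those failures appear to be expected output of the test rather than a fault in the run.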
00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.652 15:37:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.024 15:37:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.024 00:19:18.024 real 0m29.251s 00:19:18.024 user 0m41.324s 00:19:18.024 sys 0m9.955s 00:19:18.024 15:37:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:18.024 15:37:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:18.024 ************************************ 00:19:18.024 END TEST nvmf_zcopy 00:19:18.024 ************************************ 00:19:18.024 15:37:30 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:18.024 15:37:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:18.025 15:37:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:18.025 15:37:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:18.025 ************************************ 00:19:18.025 START TEST nvmf_nmic 00:19:18.025 ************************************ 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:18.025 * Looking for test storage... 
00:19:18.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.025 15:37:30 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.025 15:37:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.554 
15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:20.554 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.554 15:37:33 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:20.554 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:20.554 Found net devices under 0000:09:00.0: cvl_0_0 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:20.554 Found net devices under 0000:09:00.1: cvl_0_1 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
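With the initiator and target addresses chosen, nvmf_tcp_init moves one of the two detected E810 (ice) ports into a private network namespace so a single host can act as both NVMe/TCP initiator and target. A condensed sketch of the commands the trace below executes, assuming the interface names cvl_0_0/cvl_0_1 and the addresses detected on this rig:

  #!/usr/bin/env bash
  # Sketch of the back-to-back wiring performed by nvmf_tcp_init in the trace
  # that follows; interface names and addresses are the ones this rig detected
  # and will differ on other hardware.
  set -e
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  # The target-side port lives in its own namespace; the initiator port stays in the root namespace.
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port on the initiator side and confirm reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1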
00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:19:20.554 00:19:20.554 --- 10.0.0.2 ping statistics --- 00:19:20.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.554 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:19:20.554 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:19:20.812 00:19:20.812 --- 10.0.0.1 ping statistics --- 00:19:20.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.812 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1313729 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1313729 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1313729 ']' 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.812 15:37:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:20.812 [2024-05-15 15:37:33.721042] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:20.812 [2024-05-15 15:37:33.721117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.812 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.812 [2024-05-15 15:37:33.768147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:20.812 [2024-05-15 15:37:33.805350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.812 [2024-05-15 15:37:33.899263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:20.812 [2024-05-15 15:37:33.899319] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.812 [2024-05-15 15:37:33.899335] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.812 [2024-05-15 15:37:33.899349] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.812 [2024-05-15 15:37:33.899360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.812 [2024-05-15 15:37:33.899418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.812 [2024-05-15 15:37:33.899471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.812 [2024-05-15 15:37:33.899503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.812 [2024-05-15 15:37:33.899506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 [2024-05-15 15:37:34.057966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 Malloc0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 [2024-05-15 15:37:34.110747] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:21.071 [2024-05-15 15:37:34.111070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:21.071 test case1: single bdev can't be used in multiple subsystems 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 [2024-05-15 15:37:34.134858] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:21.071 [2024-05-15 15:37:34.134887] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:21.071 [2024-05-15 15:37:34.134917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.071 request: 00:19:21.071 { 00:19:21.071 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:21.071 "namespace": { 00:19:21.071 "bdev_name": "Malloc0", 00:19:21.071 "no_auto_visible": false 00:19:21.071 }, 00:19:21.071 "method": "nvmf_subsystem_add_ns", 00:19:21.071 "req_id": 1 00:19:21.071 } 00:19:21.071 Got JSON-RPC error response 00:19:21.071 response: 00:19:21.071 { 00:19:21.071 "code": -32602, 00:19:21.071 "message": "Invalid parameters" 00:19:21.071 } 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:21.071 Adding namespace failed - expected result. 
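Test case 1 above hinges on the exclusive_write claim the NVMe-oF target takes on a bdev once it is exposed as a namespace. A minimal sketch of the same check against a running nvmf_tgt, assuming scripts/rpc.py from an SPDK checkout (the trace issues the identical RPCs through its rpc_cmd helper):

  #!/usr/bin/env bash
  # Sketch: a bdev serving as a namespace of one subsystem cannot be added
  # to a second subsystem. Assumes a running nvmf_tgt reachable on 10.0.0.2.
  RPC=scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

  # Expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1,
  # so this returns the 'Invalid parameters' JSON-RPC error captured above.
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && echo 'unexpected success'

Test case 2, which follows, checks the opposite direction: the same subsystem can expose additional listeners (port 4421 here) so a host can connect to the same namespace over multiple paths.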
00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:21.071 test case2: host connect to nvmf target in multiple paths 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:21.071 [2024-05-15 15:37:34.142976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.071 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:21.635 15:37:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:22.565 15:37:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:22.565 15:37:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:22.565 15:37:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.565 15:37:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:22.565 15:37:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:24.463 15:37:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:24.463 [global] 00:19:24.463 thread=1 00:19:24.463 invalidate=1 00:19:24.463 rw=write 00:19:24.463 time_based=1 00:19:24.463 runtime=1 00:19:24.463 ioengine=libaio 00:19:24.464 direct=1 00:19:24.464 bs=4096 00:19:24.464 iodepth=1 00:19:24.464 norandommap=0 00:19:24.464 numjobs=1 00:19:24.464 00:19:24.464 verify_dump=1 00:19:24.464 verify_backlog=512 00:19:24.464 verify_state_save=0 00:19:24.464 do_verify=1 00:19:24.464 verify=crc32c-intel 00:19:24.464 [job0] 00:19:24.464 filename=/dev/nvme0n1 00:19:24.464 Could not set queue depth (nvme0n1) 00:19:24.721 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.721 fio-3.35 00:19:24.721 Starting 1 thread 00:19:25.652 00:19:25.652 job0: (groupid=0, jobs=1): err= 0: pid=1314289: Wed May 15 15:37:38 2024 00:19:25.652 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:19:25.652 slat (nsec): min=15769, max=34942, avg=19731.90, stdev=7335.32 
00:19:25.652 clat (usec): min=40840, max=42073, avg=41296.40, stdev=495.77 00:19:25.652 lat (usec): min=40857, max=42088, avg=41316.13, stdev=497.05 00:19:25.652 clat percentiles (usec): 00:19:25.652 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:25.652 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:25.652 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:25.652 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:25.652 | 99.99th=[42206] 00:19:25.652 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:19:25.652 slat (usec): min=8, max=30443, avg=80.32, stdev=1344.53 00:19:25.652 clat (usec): min=160, max=438, avg=213.89, stdev=32.99 00:19:25.652 lat (usec): min=170, max=30664, avg=294.21, stdev=1345.28 00:19:25.652 clat percentiles (usec): 00:19:25.652 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:19:25.652 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:19:25.652 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 251], 00:19:25.652 | 99.00th=[ 408], 99.50th=[ 437], 99.90th=[ 441], 99.95th=[ 441], 00:19:25.652 | 99.99th=[ 441] 00:19:25.652 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:25.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:25.652 lat (usec) : 250=90.99%, 500=5.07% 00:19:25.652 lat (msec) : 50=3.94% 00:19:25.652 cpu : usr=0.59%, sys=1.47%, ctx=536, majf=0, minf=2 00:19:25.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.652 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.652 00:19:25.652 Run status group 0 (all jobs): 00:19:25.652 READ: bw=82.3KiB/s (84.2kB/s), 82.3KiB/s-82.3KiB/s (84.2kB/s-84.2kB/s), io=84.0KiB (86.0kB), run=1021-1021msec 00:19:25.652 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:19:25.652 00:19:25.652 Disk stats (read/write): 00:19:25.652 nvme0n1: ios=43/512, merge=0/0, ticks=1697/95, in_queue=1792, util=98.60% 00:19:25.652 15:37:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:25.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.909 rmmod nvme_tcp 00:19:25.909 rmmod nvme_fabrics 00:19:25.909 rmmod nvme_keyring 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1313729 ']' 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1313729 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1313729 ']' 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1313729 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1313729 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1313729' 00:19:25.909 killing process with pid 1313729 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1313729 00:19:25.909 [2024-05-15 15:37:38.947755] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:25.909 15:37:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1313729 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.166 15:37:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.694 15:37:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.694 00:19:28.694 real 0m10.372s 00:19:28.694 user 0m22.242s 00:19:28.694 sys 0m2.668s 00:19:28.694 15:37:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:28.694 15:37:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:28.694 ************************************ 00:19:28.694 END TEST nvmf_nmic 00:19:28.694 ************************************ 00:19:28.694 15:37:41 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:28.694 15:37:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:28.694 15:37:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:28.694 15:37:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.694 ************************************ 00:19:28.694 START TEST nvmf_fio_target 00:19:28.694 ************************************ 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:28.694 * Looking for test storage... 00:19:28.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.694 15:37:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.695 15:37:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.221 15:37:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:31.221 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:31.221 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.221 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.222 15:37:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:31.222 Found net devices under 0000:09:00.0: cvl_0_0 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:31.222 Found net devices under 0000:09:00.1: cvl_0_1 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:19:31.222 00:19:31.222 --- 10.0.0.2 ping statistics --- 00:19:31.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.222 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:19:31.222 00:19:31.222 --- 10.0.0.1 ping statistics --- 00:19:31.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.222 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.222 15:37:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1316725 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1316725 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 1316725 ']' 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
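Before nvmf_tgt is launched above, the common helpers have already wired up the two-port (cvl_0_0 / cvl_0_1) test topology used for the rest of the run. Condensed from the traced commands, omitting the initial address flushes (a sketch only; the interface names and 10.0.0.x addresses are the ones the log reports):

  ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target listen address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to port 4420
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability

With both pings answering (0.257 ms and 0.188 ms above), nvmf_tgt is started inside cvl_0_0_ns_spdk and later told to listen on 10.0.0.2 port 4420, while the host-side nvme initiator reaches it over cvl_0_1.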
00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:31.222 15:37:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.222 [2024-05-15 15:37:44.070228] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:31.222 [2024-05-15 15:37:44.070320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.222 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.222 [2024-05-15 15:37:44.116304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:31.222 [2024-05-15 15:37:44.154062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.222 [2024-05-15 15:37:44.246527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.222 [2024-05-15 15:37:44.246582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.222 [2024-05-15 15:37:44.246607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.222 [2024-05-15 15:37:44.246621] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.222 [2024-05-15 15:37:44.246633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.222 [2024-05-15 15:37:44.246714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.222 [2024-05-15 15:37:44.246767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.222 [2024-05-15 15:37:44.246818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.222 [2024-05-15 15:37:44.246821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:32.182 [2024-05-15 15:37:45.239786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.182 15:37:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:32.464 15:37:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:32.464 15:37:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:32.732 15:37:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:32.732 15:37:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:32.989 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:32.989 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.245 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:33.502 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:33.502 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.066 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:34.066 15:37:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.066 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:34.066 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.323 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:34.323 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:34.580 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:34.837 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:34.837 15:37:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:35.094 15:37:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:35.094 15:37:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:35.351 15:37:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.608 [2024-05-15 15:37:48.585962] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:35.608 [2024-05-15 15:37:48.586313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.608 15:37:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:35.865 15:37:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:36.122 15:37:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:36.687 15:37:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:36.687 15:37:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:36.687 15:37:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.687 15:37:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:36.687 15:37:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:36.687 15:37:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:39.214 15:37:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:39.214 [global] 00:19:39.214 thread=1 00:19:39.214 invalidate=1 00:19:39.214 rw=write 00:19:39.214 time_based=1 00:19:39.214 runtime=1 00:19:39.214 ioengine=libaio 00:19:39.214 direct=1 00:19:39.214 bs=4096 00:19:39.214 iodepth=1 00:19:39.214 norandommap=0 00:19:39.214 numjobs=1 00:19:39.214 00:19:39.214 verify_dump=1 00:19:39.214 verify_backlog=512 00:19:39.214 verify_state_save=0 00:19:39.214 do_verify=1 00:19:39.214 verify=crc32c-intel 00:19:39.214 [job0] 00:19:39.214 filename=/dev/nvme0n1 00:19:39.214 [job1] 00:19:39.214 filename=/dev/nvme0n2 00:19:39.214 [job2] 00:19:39.214 filename=/dev/nvme0n3 00:19:39.214 [job3] 00:19:39.214 filename=/dev/nvme0n4 00:19:39.214 Could not set queue depth (nvme0n1) 00:19:39.214 Could not set queue depth (nvme0n2) 00:19:39.214 Could not set queue depth (nvme0n3) 00:19:39.214 Could not set queue depth (nvme0n4) 00:19:39.214 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.214 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.214 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.214 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.214 fio-3.35 00:19:39.214 Starting 4 threads 00:19:40.149 00:19:40.149 job0: (groupid=0, jobs=1): err= 0: pid=1317804: Wed May 15 15:37:53 2024 00:19:40.149 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:19:40.149 slat (nsec): min=8066, max=32533, avg=14555.86, stdev=6008.67 00:19:40.149 clat (usec): min=40918, max=41389, avg=41002.39, stdev=94.02 00:19:40.149 lat (usec): min=40951, max=41397, avg=41016.95, stdev=91.90 00:19:40.149 clat percentiles (usec): 00:19:40.149 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:19:40.149 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:40.149 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:40.149 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:40.149 | 99.99th=[41157] 00:19:40.149 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:19:40.149 slat (nsec): min=8743, max=55186, avg=14427.47, stdev=6417.89 00:19:40.149 clat (usec): min=186, max=865, avg=259.95, stdev=53.20 00:19:40.149 lat (usec): min=196, max=904, avg=274.38, stdev=55.50 00:19:40.149 clat percentiles (usec): 00:19:40.149 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:19:40.149 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 265], 00:19:40.149 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 347], 00:19:40.149 | 99.00th=[ 416], 99.50th=[ 465], 99.90th=[ 865], 99.95th=[ 865], 00:19:40.149 | 99.99th=[ 865] 00:19:40.149 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:40.149 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:40.149 lat (usec) : 250=48.78%, 500=46.90%, 750=0.19%, 1000=0.19% 00:19:40.149 lat (msec) : 50=3.94% 00:19:40.149 cpu : usr=0.40%, sys=1.00%, ctx=533, majf=0, minf=1 00:19:40.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.149 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.149 job1: (groupid=0, jobs=1): err= 0: pid=1317805: Wed May 15 15:37:53 2024 00:19:40.149 read: IOPS=22, BW=91.0KiB/s (93.2kB/s)(92.0KiB/1011msec) 00:19:40.149 slat (nsec): min=8982, max=31214, avg=14124.83, stdev=4332.66 00:19:40.149 clat (usec): min=324, max=41082, avg=39179.96, stdev=8471.12 00:19:40.149 lat (usec): min=337, max=41094, avg=39194.08, stdev=8471.45 00:19:40.149 clat percentiles (usec): 00:19:40.149 | 1.00th=[ 326], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:40.149 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:40.149 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:40.149 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:40.149 | 99.99th=[41157] 00:19:40.149 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:19:40.149 slat (nsec): min=7280, max=41109, avg=10942.46, stdev=4922.20 00:19:40.149 clat (usec): min=166, max=474, avg=199.80, stdev=19.91 00:19:40.149 lat (usec): min=175, max=486, avg=210.74, stdev=20.69 00:19:40.149 clat percentiles (usec): 00:19:40.149 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:19:40.149 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:19:40.149 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:19:40.149 | 99.00th=[ 247], 99.50th=[ 306], 99.90th=[ 474], 99.95th=[ 474], 00:19:40.149 | 99.99th=[ 474] 00:19:40.149 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:40.149 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:40.149 lat (usec) : 250=94.95%, 500=0.93% 00:19:40.149 lat (msec) : 50=4.11% 00:19:40.149 cpu : usr=0.40%, sys=0.40%, ctx=535, majf=0, minf=1 00:19:40.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:19:40.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.149 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.150 job2: (groupid=0, jobs=1): err= 0: pid=1317806: Wed May 15 15:37:53 2024 00:19:40.150 read: IOPS=1678, BW=6713KiB/s (6874kB/s)(6720KiB/1001msec) 00:19:40.150 slat (nsec): min=4462, max=70317, avg=13594.60, stdev=8745.73 00:19:40.150 clat (usec): min=248, max=718, avg=313.92, stdev=51.92 00:19:40.150 lat (usec): min=256, max=752, avg=327.52, stdev=54.42 00:19:40.150 clat percentiles (usec): 00:19:40.150 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:19:40.150 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:19:40.150 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 379], 95.00th=[ 392], 00:19:40.150 | 99.00th=[ 465], 99.50th=[ 523], 99.90th=[ 660], 99.95th=[ 717], 00:19:40.150 | 99.99th=[ 717] 00:19:40.150 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:40.150 slat (nsec): min=5948, max=63246, avg=9895.12, stdev=6305.77 00:19:40.150 clat (usec): min=168, max=512, avg=203.80, stdev=44.40 00:19:40.150 lat (usec): min=175, max=560, avg=213.70, stdev=48.20 00:19:40.150 clat percentiles (usec): 00:19:40.150 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 180], 00:19:40.150 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:19:40.150 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 235], 95.00th=[ 289], 00:19:40.150 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 490], 00:19:40.150 | 99.99th=[ 515] 00:19:40.150 bw ( KiB/s): min= 8192, max= 8192, per=57.77%, avg=8192.00, stdev= 0.00, samples=1 00:19:40.150 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:40.150 lat (usec) : 250=50.56%, 500=49.14%, 750=0.30% 00:19:40.150 cpu : usr=2.70%, sys=4.10%, ctx=3730, majf=0, minf=2 00:19:40.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.150 issued rwts: total=1680,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.150 job3: (groupid=0, jobs=1): err= 0: pid=1317807: Wed May 15 15:37:53 2024 00:19:40.150 read: IOPS=23, BW=95.7KiB/s (98.0kB/s)(96.0KiB/1003msec) 00:19:40.150 slat (nsec): min=8663, max=18046, avg=14127.54, stdev=1845.07 00:19:40.150 clat (usec): min=334, max=41067, avg=35892.21, stdev=13725.71 00:19:40.150 lat (usec): min=349, max=41080, avg=35906.34, stdev=13726.19 00:19:40.150 clat percentiles (usec): 00:19:40.150 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 355], 20.00th=[41157], 00:19:40.150 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:40.150 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:40.150 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:40.150 | 99.99th=[41157] 00:19:40.150 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:40.150 slat (nsec): min=7303, max=55552, avg=14969.54, stdev=9454.59 00:19:40.150 clat (usec): min=168, max=553, avg=256.94, stdev=81.37 00:19:40.150 lat (usec): min=177, max=593, avg=271.91, stdev=87.22 
00:19:40.150 clat percentiles (usec): 00:19:40.150 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 200], 00:19:40.150 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 237], 00:19:40.150 | 70.00th=[ 273], 80.00th=[ 314], 90.00th=[ 388], 95.00th=[ 449], 00:19:40.150 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 553], 00:19:40.150 | 99.99th=[ 553] 00:19:40.150 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:40.150 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:40.150 lat (usec) : 250=63.43%, 500=31.90%, 750=0.75% 00:19:40.150 lat (msec) : 50=3.92% 00:19:40.150 cpu : usr=0.40%, sys=0.70%, ctx=537, majf=0, minf=1 00:19:40.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.150 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.150 00:19:40.150 Run status group 0 (all jobs): 00:19:40.150 READ: bw=6916KiB/s (7082kB/s), 83.7KiB/s-6713KiB/s (85.7kB/s-6874kB/s), io=6992KiB (7160kB), run=1001-1011msec 00:19:40.150 WRITE: bw=13.8MiB/s (14.5MB/s), 2026KiB/s-8184KiB/s (2074kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1011msec 00:19:40.150 00:19:40.150 Disk stats (read/write): 00:19:40.150 nvme0n1: ios=67/512, merge=0/0, ticks=736/130, in_queue=866, util=86.97% 00:19:40.150 nvme0n2: ios=43/512, merge=0/0, ticks=750/98, in_queue=848, util=86.95% 00:19:40.150 nvme0n3: ios=1544/1536, merge=0/0, ticks=1449/306, in_queue=1755, util=98.32% 00:19:40.150 nvme0n4: ios=77/512, merge=0/0, ticks=978/123, in_queue=1101, util=98.10% 00:19:40.150 15:37:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:40.150 [global] 00:19:40.150 thread=1 00:19:40.150 invalidate=1 00:19:40.150 rw=randwrite 00:19:40.150 time_based=1 00:19:40.150 runtime=1 00:19:40.150 ioengine=libaio 00:19:40.150 direct=1 00:19:40.150 bs=4096 00:19:40.150 iodepth=1 00:19:40.150 norandommap=0 00:19:40.150 numjobs=1 00:19:40.150 00:19:40.150 verify_dump=1 00:19:40.150 verify_backlog=512 00:19:40.150 verify_state_save=0 00:19:40.150 do_verify=1 00:19:40.150 verify=crc32c-intel 00:19:40.150 [job0] 00:19:40.150 filename=/dev/nvme0n1 00:19:40.150 [job1] 00:19:40.150 filename=/dev/nvme0n2 00:19:40.150 [job2] 00:19:40.150 filename=/dev/nvme0n3 00:19:40.150 [job3] 00:19:40.150 filename=/dev/nvme0n4 00:19:40.408 Could not set queue depth (nvme0n1) 00:19:40.408 Could not set queue depth (nvme0n2) 00:19:40.408 Could not set queue depth (nvme0n3) 00:19:40.408 Could not set queue depth (nvme0n4) 00:19:40.408 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.408 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.408 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.408 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.408 fio-3.35 00:19:40.408 Starting 4 threads 00:19:41.780 00:19:41.780 job0: (groupid=0, jobs=1): err= 0: pid=1318150: Wed May 15 15:37:54 2024 00:19:41.780 read: IOPS=743, 
BW=2973KiB/s (3044kB/s)(2976KiB/1001msec) 00:19:41.780 slat (nsec): min=5082, max=59074, avg=15680.21, stdev=9388.94 00:19:41.780 clat (usec): min=213, max=41452, avg=1031.23, stdev=5334.43 00:19:41.780 lat (usec): min=220, max=41483, avg=1046.91, stdev=5334.92 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 251], 00:19:41.780 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 326], 00:19:41.780 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 461], 95.00th=[ 510], 00:19:41.780 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:41.780 | 99.99th=[41681] 00:19:41.780 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:41.780 slat (nsec): min=6109, max=55218, avg=10038.93, stdev=5715.77 00:19:41.780 clat (usec): min=144, max=384, avg=199.72, stdev=45.31 00:19:41.780 lat (usec): min=151, max=393, avg=209.76, stdev=47.89 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:19:41.780 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 192], 60.00th=[ 210], 00:19:41.780 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 285], 00:19:41.780 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 379], 99.95th=[ 383], 00:19:41.780 | 99.99th=[ 383] 00:19:41.780 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:19:41.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:41.780 lat (usec) : 250=59.39%, 500=38.07%, 750=1.81% 00:19:41.780 lat (msec) : 50=0.74% 00:19:41.780 cpu : usr=1.40%, sys=2.30%, ctx=1768, majf=0, minf=1 00:19:41.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 issued rwts: total=744,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.780 job1: (groupid=0, jobs=1): err= 0: pid=1318156: Wed May 15 15:37:54 2024 00:19:41.780 read: IOPS=518, BW=2072KiB/s (2122kB/s)(2124KiB/1025msec) 00:19:41.780 slat (nsec): min=4783, max=62568, avg=14887.88, stdev=7485.89 00:19:41.780 clat (usec): min=241, max=41626, avg=1459.07, stdev=6753.05 00:19:41.780 lat (usec): min=251, max=41658, avg=1473.96, stdev=6753.83 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:19:41.780 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:19:41.780 | 70.00th=[ 310], 80.00th=[ 363], 90.00th=[ 474], 95.00th=[ 502], 00:19:41.780 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:41.780 | 99.99th=[41681] 00:19:41.780 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:19:41.780 slat (nsec): min=6063, max=68233, avg=14091.21, stdev=6104.24 00:19:41.780 clat (usec): min=166, max=402, avg=216.10, stdev=44.39 00:19:41.780 lat (usec): min=173, max=422, avg=230.19, stdev=45.49 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:19:41.780 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 208], 00:19:41.780 | 70.00th=[ 225], 80.00th=[ 241], 90.00th=[ 273], 95.00th=[ 318], 00:19:41.780 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 400], 99.95th=[ 404], 00:19:41.780 | 99.99th=[ 404] 00:19:41.780 bw ( KiB/s): min= 8192, max= 
8192, per=46.13%, avg=8192.00, stdev= 0.00, samples=1 00:19:41.780 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:41.780 lat (usec) : 250=56.98%, 500=41.22%, 750=0.84% 00:19:41.780 lat (msec) : 50=0.96% 00:19:41.780 cpu : usr=0.98%, sys=2.34%, ctx=1555, majf=0, minf=1 00:19:41.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 issued rwts: total=531,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.780 job2: (groupid=0, jobs=1): err= 0: pid=1318157: Wed May 15 15:37:54 2024 00:19:41.780 read: IOPS=510, BW=2042KiB/s (2091kB/s)(2120KiB/1038msec) 00:19:41.780 slat (nsec): min=6137, max=36846, avg=16187.08, stdev=4573.50 00:19:41.780 clat (usec): min=296, max=41346, avg=1433.55, stdev=6521.42 00:19:41.780 lat (usec): min=311, max=41361, avg=1449.74, stdev=6521.47 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 338], 00:19:41.780 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:19:41.780 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 433], 95.00th=[ 515], 00:19:41.780 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:41.780 | 99.99th=[41157] 00:19:41.780 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:19:41.780 slat (nsec): min=6292, max=51312, avg=16050.24, stdev=6646.82 00:19:41.780 clat (usec): min=173, max=440, avg=239.82, stdev=39.74 00:19:41.780 lat (usec): min=188, max=460, avg=255.87, stdev=39.43 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 212], 00:19:41.780 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:19:41.780 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 330], 00:19:41.780 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 429], 99.95th=[ 441], 00:19:41.780 | 99.99th=[ 441] 00:19:41.780 bw ( KiB/s): min= 8192, max= 8192, per=46.13%, avg=8192.00, stdev= 0.00, samples=1 00:19:41.780 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:41.780 lat (usec) : 250=48.91%, 500=49.23%, 750=0.90%, 1000=0.06% 00:19:41.780 lat (msec) : 50=0.90% 00:19:41.780 cpu : usr=1.54%, sys=3.09%, ctx=1554, majf=0, minf=1 00:19:41.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.780 job3: (groupid=0, jobs=1): err= 0: pid=1318158: Wed May 15 15:37:54 2024 00:19:41.780 read: IOPS=1018, BW=4076KiB/s (4173kB/s)(4100KiB/1006msec) 00:19:41.780 slat (nsec): min=6040, max=43546, avg=11787.36, stdev=5822.21 00:19:41.780 clat (usec): min=240, max=41291, avg=627.89, stdev=3577.62 00:19:41.780 lat (usec): min=248, max=41308, avg=639.68, stdev=3577.77 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 258], 20.00th=[ 265], 00:19:41.780 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 306], 60.00th=[ 330], 00:19:41.780 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 396], 00:19:41.780 
| 99.00th=[ 775], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:41.780 | 99.99th=[41157] 00:19:41.780 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:19:41.780 slat (nsec): min=7895, max=57725, avg=13541.03, stdev=6280.89 00:19:41.780 clat (usec): min=166, max=416, avg=207.95, stdev=30.41 00:19:41.780 lat (usec): min=175, max=428, avg=221.49, stdev=33.45 00:19:41.780 clat percentiles (usec): 00:19:41.780 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:19:41.780 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 212], 00:19:41.780 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 260], 00:19:41.780 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 383], 99.95th=[ 416], 00:19:41.780 | 99.99th=[ 416] 00:19:41.780 bw ( KiB/s): min= 6048, max= 6240, per=34.60%, avg=6144.00, stdev=135.76, samples=2 00:19:41.780 iops : min= 1512, max= 1560, avg=1536.00, stdev=33.94, samples=2 00:19:41.780 lat (usec) : 250=57.01%, 500=42.09%, 750=0.47%, 1000=0.08% 00:19:41.780 lat (msec) : 2=0.04%, 50=0.31% 00:19:41.780 cpu : usr=1.89%, sys=4.78%, ctx=2563, majf=0, minf=2 00:19:41.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.780 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.780 00:19:41.780 Run status group 0 (all jobs): 00:19:41.780 READ: bw=10.6MiB/s (11.2MB/s), 2042KiB/s-4076KiB/s (2091kB/s-4173kB/s), io=11.1MiB (11.6MB), run=1001-1038msec 00:19:41.780 WRITE: bw=17.3MiB/s (18.2MB/s), 3946KiB/s-6107KiB/s (4041kB/s-6254kB/s), io=18.0MiB (18.9MB), run=1001-1038msec 00:19:41.780 00:19:41.780 Disk stats (read/write): 00:19:41.781 nvme0n1: ios=584/1024, merge=0/0, ticks=616/198, in_queue=814, util=85.77% 00:19:41.781 nvme0n2: ios=539/1024, merge=0/0, ticks=575/210, in_queue=785, util=86.79% 00:19:41.781 nvme0n3: ios=548/1024, merge=0/0, ticks=829/236, in_queue=1065, util=91.02% 00:19:41.781 nvme0n4: ios=1013/1024, merge=0/0, ticks=1511/216, in_queue=1727, util=97.47% 00:19:41.781 15:37:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:41.781 [global] 00:19:41.781 thread=1 00:19:41.781 invalidate=1 00:19:41.781 rw=write 00:19:41.781 time_based=1 00:19:41.781 runtime=1 00:19:41.781 ioengine=libaio 00:19:41.781 direct=1 00:19:41.781 bs=4096 00:19:41.781 iodepth=128 00:19:41.781 norandommap=0 00:19:41.781 numjobs=1 00:19:41.781 00:19:41.781 verify_dump=1 00:19:41.781 verify_backlog=512 00:19:41.781 verify_state_save=0 00:19:41.781 do_verify=1 00:19:41.781 verify=crc32c-intel 00:19:41.781 [job0] 00:19:41.781 filename=/dev/nvme0n1 00:19:41.781 [job1] 00:19:41.781 filename=/dev/nvme0n2 00:19:41.781 [job2] 00:19:41.781 filename=/dev/nvme0n3 00:19:41.781 [job3] 00:19:41.781 filename=/dev/nvme0n4 00:19:41.781 Could not set queue depth (nvme0n1) 00:19:41.781 Could not set queue depth (nvme0n2) 00:19:41.781 Could not set queue depth (nvme0n3) 00:19:41.781 Could not set queue depth (nvme0n4) 00:19:42.038 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:42.038 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:19:42.038 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:42.038 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:42.038 fio-3.35 00:19:42.038 Starting 4 threads 00:19:43.414 00:19:43.414 job0: (groupid=0, jobs=1): err= 0: pid=1318388: Wed May 15 15:37:56 2024 00:19:43.414 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:19:43.414 slat (usec): min=2, max=19101, avg=94.80, stdev=678.71 00:19:43.414 clat (usec): min=3778, max=72943, avg=12823.52, stdev=6693.48 00:19:43.414 lat (usec): min=3786, max=72948, avg=12918.32, stdev=6728.30 00:19:43.414 clat percentiles (usec): 00:19:43.414 | 1.00th=[ 4883], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[10028], 00:19:43.414 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:19:43.414 | 70.00th=[12387], 80.00th=[12780], 90.00th=[16057], 95.00th=[28705], 00:19:43.414 | 99.00th=[42730], 99.50th=[43779], 99.90th=[53740], 99.95th=[72877], 00:19:43.414 | 99.99th=[72877] 00:19:43.414 write: IOPS=5402, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1003msec); 0 zone resets 00:19:43.414 slat (usec): min=3, max=10656, avg=81.65, stdev=549.84 00:19:43.414 clat (usec): min=599, max=30701, avg=11117.39, stdev=3901.44 00:19:43.414 lat (usec): min=612, max=30711, avg=11199.04, stdev=3933.73 00:19:43.414 clat percentiles (usec): 00:19:43.414 | 1.00th=[ 2089], 5.00th=[ 4948], 10.00th=[ 7308], 20.00th=[ 9110], 00:19:43.414 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469], 00:19:43.414 | 70.00th=[11731], 80.00th=[12256], 90.00th=[14222], 95.00th=[18482], 00:19:43.414 | 99.00th=[27395], 99.50th=[28967], 99.90th=[30802], 99.95th=[30802], 00:19:43.414 | 99.99th=[30802] 00:19:43.414 bw ( KiB/s): min=20848, max=21488, per=31.78%, avg=21168.00, stdev=452.55, samples=2 00:19:43.414 iops : min= 5212, max= 5372, avg=5292.00, stdev=113.14, samples=2 00:19:43.414 lat (usec) : 750=0.06%, 1000=0.24% 00:19:43.414 lat (msec) : 2=0.20%, 4=1.34%, 10=22.82%, 20=70.71%, 50=4.54% 00:19:43.414 lat (msec) : 100=0.10% 00:19:43.414 cpu : usr=4.09%, sys=7.09%, ctx=444, majf=0, minf=11 00:19:43.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:43.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.414 issued rwts: total=5120,5419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.414 job1: (groupid=0, jobs=1): err= 0: pid=1318390: Wed May 15 15:37:56 2024 00:19:43.414 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:19:43.414 slat (usec): min=2, max=13720, avg=109.98, stdev=764.16 00:19:43.414 clat (usec): min=2570, max=44578, avg=13737.24, stdev=5602.20 00:19:43.414 lat (usec): min=2610, max=44606, avg=13847.22, stdev=5655.29 00:19:43.414 clat percentiles (usec): 00:19:43.414 | 1.00th=[ 4686], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[10159], 00:19:43.414 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:19:43.414 | 70.00th=[13698], 80.00th=[16909], 90.00th=[21890], 95.00th=[27395], 00:19:43.414 | 99.00th=[31589], 99.50th=[31851], 99.90th=[34341], 99.95th=[37487], 00:19:43.414 | 99.99th=[44827] 00:19:43.414 write: IOPS=4885, BW=19.1MiB/s (20.0MB/s)(19.3MiB/1010msec); 0 zone resets 00:19:43.414 slat (usec): min=4, max=16206, avg=87.20, stdev=567.03 00:19:43.414 clat (usec): 
min=2318, max=65774, avg=12882.96, stdev=7353.38 00:19:43.414 lat (usec): min=2324, max=65787, avg=12970.16, stdev=7389.70 00:19:43.414 clat percentiles (usec): 00:19:43.414 | 1.00th=[ 3392], 5.00th=[ 5735], 10.00th=[ 6915], 20.00th=[ 9372], 00:19:43.414 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[11863], 00:19:43.414 | 70.00th=[12518], 80.00th=[15008], 90.00th=[19268], 95.00th=[29754], 00:19:43.414 | 99.00th=[45351], 99.50th=[51119], 99.90th=[60556], 99.95th=[60556], 00:19:43.414 | 99.99th=[65799] 00:19:43.414 bw ( KiB/s): min=16913, max=21568, per=28.89%, avg=19240.50, stdev=3291.58, samples=2 00:19:43.414 iops : min= 4228, max= 5392, avg=4810.00, stdev=823.07, samples=2 00:19:43.414 lat (msec) : 4=1.07%, 10=21.10%, 20=67.00%, 50=10.43%, 100=0.41% 00:19:43.414 cpu : usr=5.85%, sys=8.72%, ctx=474, majf=0, minf=19 00:19:43.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:43.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.414 issued rwts: total=4608,4934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.414 job2: (groupid=0, jobs=1): err= 0: pid=1318391: Wed May 15 15:37:56 2024 00:19:43.414 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:19:43.414 slat (usec): min=3, max=14355, avg=146.12, stdev=963.64 00:19:43.414 clat (usec): min=8506, max=55344, avg=18596.47, stdev=8452.73 00:19:43.415 lat (usec): min=8519, max=57485, avg=18742.58, stdev=8542.56 00:19:43.415 clat percentiles (usec): 00:19:43.415 | 1.00th=[ 9765], 5.00th=[11994], 10.00th=[12256], 20.00th=[13042], 00:19:43.415 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15401], 60.00th=[15664], 00:19:43.415 | 70.00th=[17695], 80.00th=[23462], 90.00th=[32113], 95.00th=[36963], 00:19:43.415 | 99.00th=[44827], 99.50th=[49546], 99.90th=[55313], 99.95th=[55313], 00:19:43.415 | 99.99th=[55313] 00:19:43.415 write: IOPS=2898, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1007msec); 0 zone resets 00:19:43.415 slat (usec): min=4, max=17556, avg=203.64, stdev=1051.65 00:19:43.415 clat (usec): min=1145, max=99628, avg=27444.81, stdev=21659.43 00:19:43.415 lat (usec): min=1154, max=99638, avg=27648.45, stdev=21804.55 00:19:43.415 clat percentiles (msec): 00:19:43.415 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 13], 20.00th=[ 14], 00:19:43.415 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 21], 00:19:43.415 | 70.00th=[ 28], 80.00th=[ 40], 90.00th=[ 64], 95.00th=[ 82], 00:19:43.415 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 101], 99.95th=[ 101], 00:19:43.415 | 99.99th=[ 101] 00:19:43.415 bw ( KiB/s): min= 9232, max=13104, per=16.77%, avg=11168.00, stdev=2737.92, samples=2 00:19:43.415 iops : min= 2308, max= 3276, avg=2792.00, stdev=684.48, samples=2 00:19:43.415 lat (msec) : 2=0.16%, 4=0.29%, 10=4.25%, 20=63.00%, 50=25.10% 00:19:43.415 lat (msec) : 100=7.19% 00:19:43.415 cpu : usr=2.49%, sys=3.68%, ctx=364, majf=0, minf=13 00:19:43.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:43.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.415 issued rwts: total=2560,2919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.415 job3: (groupid=0, jobs=1): err= 0: pid=1318392: Wed May 15 15:37:56 2024 
00:19:43.415 read: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(14.8MiB/1043msec) 00:19:43.415 slat (usec): min=2, max=20588, avg=128.61, stdev=762.35 00:19:43.415 clat (usec): min=4194, max=56697, avg=17897.71, stdev=8314.61 00:19:43.415 lat (usec): min=4203, max=56701, avg=18026.32, stdev=8332.10 00:19:43.415 clat percentiles (usec): 00:19:43.415 | 1.00th=[ 4293], 5.00th=[ 9634], 10.00th=[12125], 20.00th=[13042], 00:19:43.415 | 30.00th=[14353], 40.00th=[15139], 50.00th=[16319], 60.00th=[17433], 00:19:43.415 | 70.00th=[18744], 80.00th=[19530], 90.00th=[22676], 95.00th=[42206], 00:19:43.415 | 99.00th=[52691], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:19:43.415 | 99.99th=[56886] 00:19:43.415 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1043msec); 0 zone resets 00:19:43.415 slat (usec): min=4, max=18183, avg=116.82, stdev=746.30 00:19:43.415 clat (usec): min=2436, max=64681, avg=15729.71, stdev=6510.92 00:19:43.415 lat (usec): min=2478, max=64704, avg=15846.53, stdev=6556.05 00:19:43.415 clat percentiles (usec): 00:19:43.415 | 1.00th=[ 4424], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[12387], 00:19:43.415 | 30.00th=[13566], 40.00th=[14615], 50.00th=[15270], 60.00th=[15533], 00:19:43.415 | 70.00th=[16188], 80.00th=[17171], 90.00th=[20841], 95.00th=[24249], 00:19:43.415 | 99.00th=[49546], 99.50th=[60031], 99.90th=[62653], 99.95th=[62653], 00:19:43.415 | 99.99th=[64750] 00:19:43.415 bw ( KiB/s): min=16384, max=16384, per=24.60%, avg=16384.00, stdev= 0.00, samples=2 00:19:43.415 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:43.415 lat (msec) : 4=0.05%, 10=7.41%, 20=78.66%, 50=12.60%, 100=1.28% 00:19:43.415 cpu : usr=4.70%, sys=7.01%, ctx=325, majf=0, minf=9 00:19:43.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:43.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.415 issued rwts: total=3776,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.415 00:19:43.415 Run status group 0 (all jobs): 00:19:43.415 READ: bw=60.2MiB/s (63.1MB/s), 9.93MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=62.8MiB (65.8MB), run=1003-1043msec 00:19:43.415 WRITE: bw=65.0MiB/s (68.2MB/s), 11.3MiB/s-21.1MiB/s (11.9MB/s-22.1MB/s), io=67.8MiB (71.1MB), run=1003-1043msec 00:19:43.415 00:19:43.415 Disk stats (read/write): 00:19:43.415 nvme0n1: ios=4133/4342, merge=0/0, ticks=34594/33188, in_queue=67782, util=99.30% 00:19:43.415 nvme0n2: ios=3629/3827, merge=0/0, ticks=35257/28347, in_queue=63604, util=98.25% 00:19:43.415 nvme0n3: ios=1838/2048, merge=0/0, ticks=18839/26053, in_queue=44892, util=98.92% 00:19:43.415 nvme0n4: ios=3072/3220, merge=0/0, ticks=22452/23005, in_queue=45457, util=88.66% 00:19:43.415 15:37:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:43.415 [global] 00:19:43.415 thread=1 00:19:43.415 invalidate=1 00:19:43.415 rw=randwrite 00:19:43.415 time_based=1 00:19:43.415 runtime=1 00:19:43.415 ioengine=libaio 00:19:43.415 direct=1 00:19:43.415 bs=4096 00:19:43.415 iodepth=128 00:19:43.415 norandommap=0 00:19:43.415 numjobs=1 00:19:43.415 00:19:43.415 verify_dump=1 00:19:43.415 verify_backlog=512 00:19:43.415 verify_state_save=0 00:19:43.415 do_verify=1 00:19:43.415 verify=crc32c-intel 00:19:43.415 [job0] 00:19:43.415 filename=/dev/nvme0n1 
00:19:43.415 [job1] 00:19:43.415 filename=/dev/nvme0n2 00:19:43.415 [job2] 00:19:43.415 filename=/dev/nvme0n3 00:19:43.415 [job3] 00:19:43.415 filename=/dev/nvme0n4 00:19:43.415 Could not set queue depth (nvme0n1) 00:19:43.415 Could not set queue depth (nvme0n2) 00:19:43.415 Could not set queue depth (nvme0n3) 00:19:43.415 Could not set queue depth (nvme0n4) 00:19:43.415 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.415 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.415 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.415 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.415 fio-3.35 00:19:43.415 Starting 4 threads 00:19:44.788 00:19:44.788 job0: (groupid=0, jobs=1): err= 0: pid=1318622: Wed May 15 15:37:57 2024 00:19:44.788 read: IOPS=4170, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1008msec) 00:19:44.788 slat (usec): min=2, max=11854, avg=99.47, stdev=741.94 00:19:44.788 clat (usec): min=2663, max=39671, avg=14199.78, stdev=6244.75 00:19:44.788 lat (usec): min=4421, max=39676, avg=14299.25, stdev=6288.70 00:19:44.788 clat percentiles (usec): 00:19:44.788 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10552], 00:19:44.788 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[12518], 00:19:44.788 | 70.00th=[15008], 80.00th=[17171], 90.00th=[21103], 95.00th=[27395], 00:19:44.788 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:19:44.788 | 99.99th=[39584] 00:19:44.788 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:19:44.788 slat (usec): min=3, max=8545, avg=99.29, stdev=543.01 00:19:44.788 clat (usec): min=859, max=43690, avg=14663.09, stdev=8433.90 00:19:44.788 lat (usec): min=867, max=43694, avg=14762.38, stdev=8491.53 00:19:44.788 clat percentiles (usec): 00:19:44.788 | 1.00th=[ 3064], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 8717], 00:19:44.788 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:19:44.788 | 70.00th=[14746], 80.00th=[22152], 90.00th=[28967], 95.00th=[31851], 00:19:44.788 | 99.00th=[36963], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:19:44.788 | 99.99th=[43779] 00:19:44.788 bw ( KiB/s): min=17488, max=19216, per=30.46%, avg=18352.00, stdev=1221.88, samples=2 00:19:44.788 iops : min= 4372, max= 4804, avg=4588.00, stdev=305.47, samples=2 00:19:44.788 lat (usec) : 1000=0.08% 00:19:44.788 lat (msec) : 2=0.12%, 4=0.76%, 10=19.22%, 20=62.04%, 50=17.77% 00:19:44.788 cpu : usr=3.97%, sys=5.16%, ctx=445, majf=0, minf=15 00:19:44.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:44.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.788 issued rwts: total=4204,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.788 job1: (groupid=0, jobs=1): err= 0: pid=1318623: Wed May 15 15:37:57 2024 00:19:44.788 read: IOPS=3504, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1008msec) 00:19:44.788 slat (usec): min=2, max=16871, avg=142.46, stdev=770.12 00:19:44.788 clat (usec): min=3924, max=42210, avg=18228.35, stdev=5780.29 00:19:44.788 lat (usec): min=7331, max=42229, avg=18370.81, stdev=5841.44 00:19:44.788 clat 
percentiles (usec): 00:19:44.788 | 1.00th=[ 7701], 5.00th=[10814], 10.00th=[11469], 20.00th=[12649], 00:19:44.788 | 30.00th=[14091], 40.00th=[16188], 50.00th=[17695], 60.00th=[19792], 00:19:44.788 | 70.00th=[21103], 80.00th=[22676], 90.00th=[25560], 95.00th=[28181], 00:19:44.788 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[41681], 00:19:44.788 | 99.99th=[42206] 00:19:44.788 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:19:44.788 slat (usec): min=3, max=7723, avg=129.51, stdev=591.69 00:19:44.788 clat (usec): min=6389, max=39925, avg=17476.93, stdev=7410.31 00:19:44.788 lat (usec): min=6407, max=39934, avg=17606.44, stdev=7465.43 00:19:44.788 clat percentiles (usec): 00:19:44.788 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11863], 00:19:44.788 | 30.00th=[12387], 40.00th=[13566], 50.00th=[15139], 60.00th=[18482], 00:19:44.789 | 70.00th=[19268], 80.00th=[20317], 90.00th=[28443], 95.00th=[35914], 00:19:44.789 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:19:44.789 | 99.99th=[40109] 00:19:44.789 bw ( KiB/s): min=12272, max=16400, per=23.80%, avg=14336.00, stdev=2918.94, samples=2 00:19:44.789 iops : min= 3068, max= 4100, avg=3584.00, stdev=729.73, samples=2 00:19:44.789 lat (msec) : 4=0.01%, 10=4.34%, 20=67.11%, 50=28.54% 00:19:44.789 cpu : usr=4.87%, sys=6.55%, ctx=431, majf=0, minf=9 00:19:44.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:44.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.789 issued rwts: total=3533,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.789 job2: (groupid=0, jobs=1): err= 0: pid=1318624: Wed May 15 15:37:57 2024 00:19:44.789 read: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1004msec) 00:19:44.789 slat (usec): min=2, max=17892, avg=185.16, stdev=1101.17 00:19:44.789 clat (usec): min=1061, max=48856, avg=23482.43, stdev=9558.47 00:19:44.789 lat (usec): min=10367, max=48860, avg=23667.60, stdev=9570.35 00:19:44.789 clat percentiles (usec): 00:19:44.789 | 1.00th=[10552], 5.00th=[11863], 10.00th=[13173], 20.00th=[14222], 00:19:44.789 | 30.00th=[15401], 40.00th=[19530], 50.00th=[22152], 60.00th=[25560], 00:19:44.789 | 70.00th=[28181], 80.00th=[30016], 90.00th=[38011], 95.00th=[45351], 00:19:44.789 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:44.789 | 99.99th=[49021] 00:19:44.789 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:19:44.789 slat (usec): min=3, max=10216, avg=145.64, stdev=741.47 00:19:44.789 clat (usec): min=9954, max=32995, avg=19262.61, stdev=5039.95 00:19:44.789 lat (usec): min=9988, max=33028, avg=19408.25, stdev=5027.65 00:19:44.789 clat percentiles (usec): 00:19:44.789 | 1.00th=[10159], 5.00th=[12125], 10.00th=[13042], 20.00th=[13960], 00:19:44.789 | 30.00th=[15926], 40.00th=[17695], 50.00th=[19268], 60.00th=[19792], 00:19:44.789 | 70.00th=[21627], 80.00th=[25035], 90.00th=[25560], 95.00th=[27919], 00:19:44.789 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:19:44.789 | 99.99th=[32900] 00:19:44.789 bw ( KiB/s): min= 8696, max=15880, per=20.40%, avg=12288.00, stdev=5079.86, samples=2 00:19:44.789 iops : min= 2174, max= 3970, avg=3072.00, stdev=1269.96, samples=2 00:19:44.789 lat (msec) : 2=0.02%, 10=0.12%, 20=52.95%, 50=46.92% 00:19:44.789 cpu : 
usr=2.49%, sys=4.29%, ctx=267, majf=0, minf=13 00:19:44.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:44.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.789 issued rwts: total=2881,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.789 job3: (groupid=0, jobs=1): err= 0: pid=1318625: Wed May 15 15:37:57 2024 00:19:44.789 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:19:44.789 slat (usec): min=2, max=16635, avg=147.75, stdev=921.31 00:19:44.789 clat (msec): min=8, max=102, avg=18.62, stdev=18.63 00:19:44.789 lat (msec): min=8, max=102, avg=18.77, stdev=18.76 00:19:44.789 clat percentiles (msec): 00:19:44.789 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:19:44.789 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:19:44.789 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 74], 00:19:44.789 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 103], 99.95th=[ 103], 00:19:44.789 | 99.99th=[ 103] 00:19:44.789 write: IOPS=3902, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1004msec); 0 zone resets 00:19:44.789 slat (usec): min=3, max=20201, avg=112.25, stdev=752.77 00:19:44.789 clat (usec): min=471, max=63720, avg=15330.71, stdev=9466.72 00:19:44.789 lat (usec): min=4040, max=63730, avg=15442.96, stdev=9505.88 00:19:44.789 clat percentiles (usec): 00:19:44.789 | 1.00th=[ 4490], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10683], 00:19:44.789 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:19:44.789 | 70.00th=[14746], 80.00th=[17695], 90.00th=[20579], 95.00th=[37487], 00:19:44.789 | 99.00th=[59507], 99.50th=[59507], 99.90th=[63701], 99.95th=[63701], 00:19:44.789 | 99.99th=[63701] 00:19:44.789 bw ( KiB/s): min= 8192, max=22128, per=25.16%, avg=15160.00, stdev=9854.24, samples=2 00:19:44.789 iops : min= 2048, max= 5532, avg=3790.00, stdev=2463.56, samples=2 00:19:44.789 lat (usec) : 500=0.01% 00:19:44.789 lat (msec) : 10=4.68%, 20=81.06%, 50=10.02%, 100=4.05%, 250=0.17% 00:19:44.789 cpu : usr=4.19%, sys=6.68%, ctx=289, majf=0, minf=13 00:19:44.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:44.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.789 issued rwts: total=3584,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.789 00:19:44.789 Run status group 0 (all jobs): 00:19:44.789 READ: bw=55.0MiB/s (57.7MB/s), 11.2MiB/s-16.3MiB/s (11.8MB/s-17.1MB/s), io=55.5MiB (58.2MB), run=1004-1008msec 00:19:44.789 WRITE: bw=58.8MiB/s (61.7MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.7MB/s), io=59.3MiB (62.2MB), run=1004-1008msec 00:19:44.789 00:19:44.789 Disk stats (read/write): 00:19:44.789 nvme0n1: ios=3623/4079, merge=0/0, ticks=39064/40764, in_queue=79828, util=96.49% 00:19:44.789 nvme0n2: ios=3118/3351, merge=0/0, ticks=17238/15930, in_queue=33168, util=96.85% 00:19:44.789 nvme0n3: ios=2612/2688, merge=0/0, ticks=14515/12539, in_queue=27054, util=96.88% 00:19:44.789 nvme0n4: ios=2739/3072, merge=0/0, ticks=16210/14306, in_queue=30516, util=96.33% 00:19:44.789 15:37:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:44.789 15:37:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1318762 
00:19:44.789 15:37:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:44.789 15:37:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:44.789 [global] 00:19:44.789 thread=1 00:19:44.789 invalidate=1 00:19:44.789 rw=read 00:19:44.789 time_based=1 00:19:44.789 runtime=10 00:19:44.789 ioengine=libaio 00:19:44.789 direct=1 00:19:44.789 bs=4096 00:19:44.789 iodepth=1 00:19:44.789 norandommap=1 00:19:44.789 numjobs=1 00:19:44.789 00:19:44.789 [job0] 00:19:44.789 filename=/dev/nvme0n1 00:19:44.789 [job1] 00:19:44.789 filename=/dev/nvme0n2 00:19:44.789 [job2] 00:19:44.789 filename=/dev/nvme0n3 00:19:44.789 [job3] 00:19:44.789 filename=/dev/nvme0n4 00:19:44.789 Could not set queue depth (nvme0n1) 00:19:44.789 Could not set queue depth (nvme0n2) 00:19:44.789 Could not set queue depth (nvme0n3) 00:19:44.789 Could not set queue depth (nvme0n4) 00:19:44.789 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.789 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.789 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.789 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.789 fio-3.35 00:19:44.789 Starting 4 threads 00:19:48.076 15:38:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:48.077 15:38:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:48.077 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=28987392, buflen=4096 00:19:48.077 fio: pid=1318880, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:48.077 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7086080, buflen=4096 00:19:48.077 fio: pid=1318870, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:48.077 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.077 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:48.643 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=380928, buflen=4096 00:19:48.643 fio: pid=1318850, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:48.643 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.643 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:48.643 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13168640, buflen=4096 00:19:48.643 fio: pid=1318853, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:48.643 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.643 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:48.643 
00:19:48.643 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1318850: Wed May 15 15:38:01 2024 00:19:48.643 read: IOPS=27, BW=109KiB/s (111kB/s)(372KiB/3419msec) 00:19:48.643 slat (usec): min=9, max=5847, avg=115.40, stdev=666.41 00:19:48.643 clat (usec): min=333, max=41492, avg=36635.13, stdev=12600.27 00:19:48.643 lat (usec): min=347, max=47049, avg=36751.58, stdev=12651.39 00:19:48.643 clat percentiles (usec): 00:19:48.643 | 1.00th=[ 334], 5.00th=[ 529], 10.00th=[ 734], 20.00th=[41157], 00:19:48.643 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:48.643 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:48.643 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:48.643 | 99.99th=[41681] 00:19:48.643 bw ( KiB/s): min= 96, max= 136, per=0.83%, avg=110.67, stdev=13.78, samples=6 00:19:48.643 iops : min= 24, max= 34, avg=27.67, stdev= 3.44, samples=6 00:19:48.643 lat (usec) : 500=4.26%, 750=6.38% 00:19:48.643 lat (msec) : 50=88.30% 00:19:48.643 cpu : usr=0.12%, sys=0.00%, ctx=96, majf=0, minf=1 00:19:48.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.643 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1318853: Wed May 15 15:38:01 2024 00:19:48.643 read: IOPS=877, BW=3508KiB/s (3592kB/s)(12.6MiB/3666msec) 00:19:48.643 slat (usec): min=5, max=8918, avg=14.03, stdev=189.18 00:19:48.643 clat (usec): min=227, max=43851, avg=1123.81, stdev=5734.97 00:19:48.643 lat (usec): min=233, max=43877, avg=1137.83, stdev=5741.53 00:19:48.643 clat percentiles (usec): 00:19:48.643 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 265], 00:19:48.643 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:19:48.643 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 355], 00:19:48.643 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:48.643 | 99.99th=[43779] 00:19:48.643 bw ( KiB/s): min= 104, max=11808, per=27.13%, avg=3586.14, stdev=4416.80, samples=7 00:19:48.643 iops : min= 26, max= 2952, avg=896.43, stdev=1104.21, samples=7 00:19:48.643 lat (usec) : 250=9.33%, 500=88.09%, 750=0.50% 00:19:48.643 lat (msec) : 10=0.03%, 50=2.02% 00:19:48.643 cpu : usr=0.52%, sys=1.26%, ctx=3219, majf=0, minf=1 00:19:48.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.643 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1318870: Wed May 15 15:38:01 2024 00:19:48.643 read: IOPS=553, BW=2214KiB/s (2268kB/s)(6920KiB/3125msec) 00:19:48.643 slat (usec): min=5, max=20910, avg=32.07, stdev=502.22 00:19:48.643 clat (usec): min=260, max=41420, avg=1769.12, stdev=7308.31 00:19:48.643 lat (usec): min=279, max=61956, avg=1801.20, stdev=7390.49 00:19:48.643 clat 
percentiles (usec): 00:19:48.643 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 355], 00:19:48.643 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 429], 00:19:48.643 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[ 498], 95.00th=[ 537], 00:19:48.643 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:19:48.643 | 99.99th=[41681] 00:19:48.643 bw ( KiB/s): min= 96, max= 8496, per=17.41%, avg=2302.67, stdev=3174.25, samples=6 00:19:48.643 iops : min= 24, max= 2124, avg=575.67, stdev=793.56, samples=6 00:19:48.643 lat (usec) : 500=90.12%, 750=6.47% 00:19:48.643 lat (msec) : 50=3.35% 00:19:48.643 cpu : usr=0.45%, sys=1.31%, ctx=1732, majf=0, minf=1 00:19:48.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 issued rwts: total=1731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.643 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1318880: Wed May 15 15:38:01 2024 00:19:48.643 read: IOPS=2441, BW=9765KiB/s (9999kB/s)(27.6MiB/2899msec) 00:19:48.643 slat (nsec): min=4955, max=74493, avg=18536.97, stdev=10181.92 00:19:48.643 clat (usec): min=235, max=42409, avg=386.76, stdev=1217.01 00:19:48.643 lat (usec): min=243, max=42421, avg=405.29, stdev=1217.21 00:19:48.643 clat percentiles (usec): 00:19:48.643 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 281], 00:19:48.643 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 359], 00:19:48.643 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 457], 95.00th=[ 490], 00:19:48.643 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 1467], 99.95th=[42206], 00:19:48.643 | 99.99th=[42206] 00:19:48.643 bw ( KiB/s): min= 8712, max=12816, per=79.40%, avg=10496.00, stdev=1624.69, samples=5 00:19:48.643 iops : min= 2178, max= 3204, avg=2624.00, stdev=406.17, samples=5 00:19:48.643 lat (usec) : 250=2.02%, 500=94.25%, 750=3.56%, 1000=0.03% 00:19:48.643 lat (msec) : 2=0.04%, 50=0.08% 00:19:48.643 cpu : usr=1.55%, sys=6.11%, ctx=7081, majf=0, minf=1 00:19:48.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.643 issued rwts: total=7078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.643 00:19:48.643 Run status group 0 (all jobs): 00:19:48.643 READ: bw=12.9MiB/s (13.5MB/s), 109KiB/s-9765KiB/s (111kB/s-9999kB/s), io=47.3MiB (49.6MB), run=2899-3666msec 00:19:48.643 00:19:48.643 Disk stats (read/write): 00:19:48.643 nvme0n1: ios=91/0, merge=0/0, ticks=3328/0, in_queue=3328, util=95.77% 00:19:48.643 nvme0n2: ios=3213/0, merge=0/0, ticks=3507/0, in_queue=3507, util=96.09% 00:19:48.643 nvme0n3: ios=1729/0, merge=0/0, ticks=2996/0, in_queue=2996, util=96.13% 00:19:48.643 nvme0n4: ios=7116/0, merge=0/0, ticks=3448/0, in_queue=3448, util=99.80% 00:19:48.900 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.900 15:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc3 00:19:49.159 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:49.159 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:49.416 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:49.416 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:49.674 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:49.674 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:49.932 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:49.932 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1318762 00:19:49.932 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:49.932 15:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:50.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:50.190 nvmf hotplug test: fio failed as expected 00:19:50.190 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.447 15:38:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.447 rmmod nvme_tcp 00:19:50.447 rmmod nvme_fabrics 00:19:50.447 rmmod nvme_keyring 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1316725 ']' 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1316725 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1316725 ']' 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1316725 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1316725 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1316725' 00:19:50.447 killing process with pid 1316725 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1316725 00:19:50.447 [2024-05-15 15:38:03.449407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:50.447 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1316725 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.704 15:38:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.641 15:38:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.641 00:19:52.641 real 0m24.397s 00:19:52.641 user 1m22.688s 00:19:52.641 sys 0m7.320s 00:19:52.641 15:38:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:52.641 15:38:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.641 ************************************ 00:19:52.641 END TEST nvmf_fio_target 00:19:52.641 ************************************ 00:19:52.642 15:38:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:52.642 15:38:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 
']' 00:19:52.642 15:38:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:52.642 15:38:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.900 ************************************ 00:19:52.900 START TEST nvmf_bdevio 00:19:52.900 ************************************ 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:52.900 * Looking for test storage... 00:19:52.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.900 15:38:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.901 15:38:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:55.430 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:55.430 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:55.430 Found net devices under 0000:09:00.0: cvl_0_0 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:55.430 
Found net devices under 0000:09:00.1: cvl_0_1 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.430 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:19:55.431 00:19:55.431 --- 10.0.0.2 ping statistics --- 00:19:55.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.431 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:19:55.431 00:19:55.431 --- 10.0.0.1 ping statistics --- 00:19:55.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.431 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1321878 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1321878 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1321878 ']' 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:55.431 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 [2024-05-15 15:38:08.518106] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:55.431 [2024-05-15 15:38:08.518201] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.689 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.689 [2024-05-15 15:38:08.563671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:55.689 [2024-05-15 15:38:08.595227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.689 [2024-05-15 15:38:08.679793] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
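Stripped of the xtrace prefixes, the network preparation traced above reduces to the short sequence below. This is a minimal sketch for anyone reproducing the topology by hand; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addressing are specific to this rig and are taken directly from the log.

    # target-side port moves into its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The nvmf_tgt process is then launched inside that namespace (via the NVMF_TARGET_NS_CMD wrapper shown above), so only the initiator-side stack runs in the root namespace.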
00:19:55.689 [2024-05-15 15:38:08.679861] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.689 [2024-05-15 15:38:08.679875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.689 [2024-05-15 15:38:08.679886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.689 [2024-05-15 15:38:08.679902] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.689 [2024-05-15 15:38:08.679994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.689 [2024-05-15 15:38:08.680490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:55.689 [2024-05-15 15:38:08.680543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:55.689 [2024-05-15 15:38:08.680547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.946 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 [2024-05-15 15:38:08.837008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 Malloc0 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.947 15:38:08 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 [2024-05-15 15:38:08.890465] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:55.947 [2024-05-15 15:38:08.890792] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.947 { 00:19:55.947 "params": { 00:19:55.947 "name": "Nvme$subsystem", 00:19:55.947 "trtype": "$TEST_TRANSPORT", 00:19:55.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.947 "adrfam": "ipv4", 00:19:55.947 "trsvcid": "$NVMF_PORT", 00:19:55.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.947 "hdgst": ${hdgst:-false}, 00:19:55.947 "ddgst": ${ddgst:-false} 00:19:55.947 }, 00:19:55.947 "method": "bdev_nvme_attach_controller" 00:19:55.947 } 00:19:55.947 EOF 00:19:55.947 )") 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:55.947 15:38:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:55.947 "params": { 00:19:55.947 "name": "Nvme1", 00:19:55.947 "trtype": "tcp", 00:19:55.947 "traddr": "10.0.0.2", 00:19:55.947 "adrfam": "ipv4", 00:19:55.947 "trsvcid": "4420", 00:19:55.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.947 "hdgst": false, 00:19:55.947 "ddgst": false 00:19:55.947 }, 00:19:55.947 "method": "bdev_nvme_attach_controller" 00:19:55.947 }' 00:19:55.947 [2024-05-15 15:38:08.937173] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:55.947 [2024-05-15 15:38:08.937280] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321906 ] 00:19:55.947 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.947 [2024-05-15 15:38:08.976028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
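For reference, the rpc_cmd calls traced by bdevio.sh above amount to the following target-side provisioning. This is a sketch rather than a verbatim replay: rpc_cmd is the test wrapper around scripts/rpc.py, and the sizes, NQN and listen address are the ones shown in the log.

    # stand up the NVMe-oF/TCP target that bdevio exercises
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches to that listener through the generated JSON config shown above (a bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420) and runs its CUnit suite against the exported namespace.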
00:19:55.947 [2024-05-15 15:38:09.010041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:56.204 [2024-05-15 15:38:09.101333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.204 [2024-05-15 15:38:09.101360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.204 [2024-05-15 15:38:09.101364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.204 I/O targets: 00:19:56.204 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:56.204 00:19:56.204 00:19:56.204 CUnit - A unit testing framework for C - Version 2.1-3 00:19:56.204 http://cunit.sourceforge.net/ 00:19:56.204 00:19:56.204 00:19:56.204 Suite: bdevio tests on: Nvme1n1 00:19:56.462 Test: blockdev write read block ...passed 00:19:56.462 Test: blockdev write zeroes read block ...passed 00:19:56.462 Test: blockdev write zeroes read no split ...passed 00:19:56.462 Test: blockdev write zeroes read split ...passed 00:19:56.462 Test: blockdev write zeroes read split partial ...passed 00:19:56.462 Test: blockdev reset ...[2024-05-15 15:38:09.447466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.462 [2024-05-15 15:38:09.447581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b52e0 (9): Bad file descriptor 00:19:56.719 [2024-05-15 15:38:09.582895] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:56.719 passed 00:19:56.719 Test: blockdev write read 8 blocks ...passed 00:19:56.719 Test: blockdev write read size > 128k ...passed 00:19:56.719 Test: blockdev write read invalid size ...passed 00:19:56.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:56.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:56.719 Test: blockdev write read max offset ...passed 00:19:56.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:56.719 Test: blockdev writev readv 8 blocks ...passed 00:19:56.719 Test: blockdev writev readv 30 x 1block ...passed 00:19:56.719 Test: blockdev writev readv block ...passed 00:19:56.719 Test: blockdev writev readv size > 128k ...passed 00:19:56.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:56.719 Test: blockdev comparev and writev ...[2024-05-15 15:38:09.796888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.796926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.796951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.796968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.797306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.797338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.797361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:19:56.719 [2024-05-15 15:38:09.797377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.797716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.797740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.797761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.797776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.798109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.798133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:56.719 [2024-05-15 15:38:09.798154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.719 [2024-05-15 15:38:09.798169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:56.977 passed 00:19:56.977 Test: blockdev nvme passthru rw ...passed 00:19:56.977 Test: blockdev nvme passthru vendor specific ...[2024-05-15 15:38:09.881545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.977 [2024-05-15 15:38:09.881571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:56.977 [2024-05-15 15:38:09.881743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.977 [2024-05-15 15:38:09.881766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:56.977 [2024-05-15 15:38:09.881935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.977 [2024-05-15 15:38:09.881957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:56.977 [2024-05-15 15:38:09.882120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.977 [2024-05-15 15:38:09.882143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:56.977 passed 00:19:56.977 Test: blockdev nvme admin passthru ...passed 00:19:56.977 Test: blockdev copy ...passed 00:19:56.977 00:19:56.977 Run Summary: Type Total Ran Passed Failed Inactive 00:19:56.977 suites 1 1 n/a 0 0 00:19:56.977 tests 23 23 23 0 0 00:19:56.977 asserts 152 152 152 0 n/a 00:19:56.977 00:19:56.977 Elapsed time = 1.316 seconds 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.235 rmmod nvme_tcp 00:19:57.235 rmmod nvme_fabrics 00:19:57.235 rmmod nvme_keyring 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1321878 ']' 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1321878 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 1321878 ']' 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1321878 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1321878 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1321878' 00:19:57.235 killing process with pid 1321878 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 1321878 00:19:57.235 [2024-05-15 15:38:10.236548] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:57.235 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1321878 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.493 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.494 15:38:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.022 15:38:12 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.022 00:20:00.022 real 0m6.769s 00:20:00.022 user 0m10.277s 00:20:00.022 sys 0m2.426s 00:20:00.022 15:38:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:00.022 15:38:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:00.022 ************************************ 00:20:00.022 END TEST nvmf_bdevio 00:20:00.022 ************************************ 00:20:00.022 15:38:12 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:00.022 15:38:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:00.022 15:38:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:00.022 15:38:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.022 ************************************ 00:20:00.022 START TEST nvmf_auth_target 00:20:00.022 ************************************ 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:00.022 * Looking for test storage... 00:20:00.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.022 15:38:12 
nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.022 15:38:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.023 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:02.551 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:02.551 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:02.551 Found net devices under 0000:09:00.0: cvl_0_0 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:02.551 Found net devices under 0000:09:00.1: cvl_0_1 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.551 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:20:02.552 00:20:02.552 --- 10.0.0.2 ping statistics --- 00:20:02.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.552 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:20:02.552 00:20:02.552 --- 10.0.0.1 ping statistics --- 00:20:02.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.552 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1324381 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 
1324381 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1324381 ']' 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.552 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=1324415 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=23d22c342ddb752c524a7a81e74ed70deb07272ae666e0f9 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Jqq 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 23d22c342ddb752c524a7a81e74ed70deb07272ae666e0f9 0 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 23d22c342ddb752c524a7a81e74ed70deb07272ae666e0f9 0 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:02.810 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=23d22c342ddb752c524a7a81e74ed70deb07272ae666e0f9 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # 
python - 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Jqq 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Jqq 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.Jqq 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=371259c284b9cd900046154745849d43 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Zot 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 371259c284b9cd900046154745849d43 1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 371259c284b9cd900046154745849d43 1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=371259c284b9cd900046154745849d43 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Zot 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Zot 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.Zot 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3e906892f1a41d2cf31a6d84fc893b6ec3cbe3222ed6bf17 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MYR 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
3e906892f1a41d2cf31a6d84fc893b6ec3cbe3222ed6bf17 2 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3e906892f1a41d2cf31a6d84fc893b6ec3cbe3222ed6bf17 2 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3e906892f1a41d2cf31a6d84fc893b6ec3cbe3222ed6bf17 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MYR 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MYR 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.MYR 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=97914b718604261f5a3ba73b987296f8fb3cd48edd16b204915a9ccfd0c2828b 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9OG 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 97914b718604261f5a3ba73b987296f8fb3cd48edd16b204915a9ccfd0c2828b 3 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 97914b718604261f5a3ba73b987296f8fb3cd48edd16b204915a9ccfd0c2828b 3 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=97914b718604261f5a3ba73b987296f8fb3cd48edd16b204915a9ccfd0c2828b 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9OG 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9OG 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.9OG 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 1324381 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1324381 ']' 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.811 15:38:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.811 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 1324415 /var/tmp/host.sock 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1324415 ']' 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.069 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jqq 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.327 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Jqq 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Jqq 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Zot 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.585 15:38:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Zot 00:20:03.585 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Zot 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.MYR 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.MYR 00:20:03.842 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.MYR 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9OG 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.9OG 00:20:04.407 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.9OG 00:20:04.665 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:04.665 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.665 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:04.665 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.665 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:04.924 
15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:04.924 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:05.182 00:20:05.182 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:05.182 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:05.182 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:05.440 { 00:20:05.440 "cntlid": 1, 00:20:05.440 "qid": 0, 00:20:05.440 "state": "enabled", 00:20:05.440 "listen_address": { 00:20:05.440 "trtype": "TCP", 00:20:05.440 "adrfam": "IPv4", 00:20:05.440 "traddr": "10.0.0.2", 00:20:05.440 "trsvcid": "4420" 00:20:05.440 }, 00:20:05.440 "peer_address": { 00:20:05.440 "trtype": "TCP", 00:20:05.440 "adrfam": "IPv4", 00:20:05.440 "traddr": "10.0.0.1", 00:20:05.440 "trsvcid": "55096" 00:20:05.440 }, 00:20:05.440 "auth": { 00:20:05.440 "state": "completed", 00:20:05.440 "digest": "sha256", 00:20:05.440 "dhgroup": "null" 00:20:05.440 } 00:20:05.440 } 00:20:05.440 ]' 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.440 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.697 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:06.630 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:07.196 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:07.453 00:20:07.453 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:07.453 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:07.454 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
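The key1 pass above boils down to the host-side RPC sequence below. This is a minimal sketch rather than a verbatim excerpt of target/auth.sh: the rpc.py path is shortened to ./scripts/rpc.py, the host RPC socket is assumed to be /var/tmp/host.sock as in the log, and the target-side rpc_cmd calls are shown against the default RPC socket.

  # Limit the host to the digest/dhgroup pair under test (sha256 / null).
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
  # Allow the host NQN to authenticate to the subsystem with key1.
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
  # Attach a controller that must complete DH-HMAC-CHAP with that key.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
  # Verify the controller exists and the qpair reports a completed auth exchange.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'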
00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:07.711 { 00:20:07.711 "cntlid": 3, 00:20:07.711 "qid": 0, 00:20:07.711 "state": "enabled", 00:20:07.711 "listen_address": { 00:20:07.711 "trtype": "TCP", 00:20:07.711 "adrfam": "IPv4", 00:20:07.711 "traddr": "10.0.0.2", 00:20:07.711 "trsvcid": "4420" 00:20:07.711 }, 00:20:07.711 "peer_address": { 00:20:07.711 "trtype": "TCP", 00:20:07.711 "adrfam": "IPv4", 00:20:07.711 "traddr": "10.0.0.1", 00:20:07.711 "trsvcid": "55114" 00:20:07.711 }, 00:20:07.711 "auth": { 00:20:07.711 "state": "completed", 00:20:07.711 "digest": "sha256", 00:20:07.711 "dhgroup": "null" 00:20:07.711 } 00:20:07.711 } 00:20:07.711 ]' 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:07.711 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:07.712 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:07.712 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.712 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.712 15:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.969 15:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:20:08.902 15:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.902 15:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.902 15:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.902 15:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 
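Each detach/connect cycle that follows can be reproduced by hand with the same tools the test drives. The sketch below assumes the subsystem and host NQN shown above; the DHHC-1 secret is elided and must match the key registered for the slot being tested.

  # Inspect the authenticated qpair: negotiated digest, DH group and final state.
  ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
        jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
  # The kernel initiator exercises the same DH-HMAC-CHAP exchange via nvme-cli.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret 'DHHC-1:01:...'   # secret elided; use the key for this slot
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Remove the host entry before the next key/dhgroup combination is configured.
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a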
00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.165 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.470 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.470 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:09.470 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:09.728 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:09.728 { 00:20:09.728 "cntlid": 5, 00:20:09.728 "qid": 0, 00:20:09.728 "state": "enabled", 00:20:09.728 "listen_address": { 00:20:09.728 "trtype": "TCP", 00:20:09.728 "adrfam": "IPv4", 00:20:09.728 "traddr": "10.0.0.2", 00:20:09.728 "trsvcid": "4420" 00:20:09.728 }, 00:20:09.728 "peer_address": { 00:20:09.728 "trtype": "TCP", 00:20:09.728 "adrfam": "IPv4", 00:20:09.728 "traddr": "10.0.0.1", 00:20:09.728 "trsvcid": "38042" 00:20:09.728 }, 00:20:09.728 "auth": { 00:20:09.728 "state": "completed", 00:20:09.728 "digest": "sha256", 00:20:09.728 "dhgroup": "null" 00:20:09.728 } 00:20:09.728 } 00:20:09.728 ]' 00:20:09.728 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.985 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.985 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:09.985 
15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:09.985 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:09.985 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.985 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.985 15:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.242 15:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:11.173 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.431 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.688 00:20:11.688 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:11.688 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.688 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:11.944 { 00:20:11.944 "cntlid": 7, 00:20:11.944 "qid": 0, 00:20:11.944 "state": "enabled", 00:20:11.944 "listen_address": { 00:20:11.944 "trtype": "TCP", 00:20:11.944 "adrfam": "IPv4", 00:20:11.944 "traddr": "10.0.0.2", 00:20:11.944 "trsvcid": "4420" 00:20:11.944 }, 00:20:11.944 "peer_address": { 00:20:11.944 "trtype": "TCP", 00:20:11.944 "adrfam": "IPv4", 00:20:11.944 "traddr": "10.0.0.1", 00:20:11.944 "trsvcid": "38060" 00:20:11.944 }, 00:20:11.944 "auth": { 00:20:11.944 "state": "completed", 00:20:11.944 "digest": "sha256", 00:20:11.944 "dhgroup": "null" 00:20:11.944 } 00:20:11.944 } 00:20:11.944 ]' 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.944 15:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:11.944 15:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:11.944 15:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:12.202 15:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.202 15:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.202 15:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.459 15:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:13.391 
15:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:13.391 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:13.957 00:20:13.957 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:13.957 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:13.957 15:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:13.957 { 00:20:13.957 "cntlid": 9, 
00:20:13.957 "qid": 0, 00:20:13.957 "state": "enabled", 00:20:13.957 "listen_address": { 00:20:13.957 "trtype": "TCP", 00:20:13.957 "adrfam": "IPv4", 00:20:13.957 "traddr": "10.0.0.2", 00:20:13.957 "trsvcid": "4420" 00:20:13.957 }, 00:20:13.957 "peer_address": { 00:20:13.957 "trtype": "TCP", 00:20:13.957 "adrfam": "IPv4", 00:20:13.957 "traddr": "10.0.0.1", 00:20:13.957 "trsvcid": "38090" 00:20:13.957 }, 00:20:13.957 "auth": { 00:20:13.957 "state": "completed", 00:20:13.957 "digest": "sha256", 00:20:13.957 "dhgroup": "ffdhe2048" 00:20:13.957 } 00:20:13.957 } 00:20:13.957 ]' 00:20:13.957 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:14.214 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.214 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:14.214 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.214 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:14.214 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.215 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.215 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.472 15:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:15.405 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:15.662 
15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:15.662 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:15.920 00:20:15.920 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:15.920 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:15.920 15:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.177 { 00:20:16.177 "cntlid": 11, 00:20:16.177 "qid": 0, 00:20:16.177 "state": "enabled", 00:20:16.177 "listen_address": { 00:20:16.177 "trtype": "TCP", 00:20:16.177 "adrfam": "IPv4", 00:20:16.177 "traddr": "10.0.0.2", 00:20:16.177 "trsvcid": "4420" 00:20:16.177 }, 00:20:16.177 "peer_address": { 00:20:16.177 "trtype": "TCP", 00:20:16.177 "adrfam": "IPv4", 00:20:16.177 "traddr": "10.0.0.1", 00:20:16.177 "trsvcid": "38120" 00:20:16.177 }, 00:20:16.177 "auth": { 00:20:16.177 "state": "completed", 00:20:16.177 "digest": "sha256", 00:20:16.177 "dhgroup": "ffdhe2048" 00:20:16.177 } 00:20:16.177 } 00:20:16.177 ]' 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.177 15:38:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.434 15:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:20:17.365 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.365 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:17.365 15:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.365 15:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:17.623 15:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.188 00:20:18.188 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:18.188 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:18.188 15:38:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.188 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:18.446 { 00:20:18.446 "cntlid": 13, 00:20:18.446 "qid": 0, 00:20:18.446 "state": "enabled", 00:20:18.446 "listen_address": { 00:20:18.446 "trtype": "TCP", 00:20:18.446 "adrfam": "IPv4", 00:20:18.446 "traddr": "10.0.0.2", 00:20:18.446 "trsvcid": "4420" 00:20:18.446 }, 00:20:18.446 "peer_address": { 00:20:18.446 "trtype": "TCP", 00:20:18.446 "adrfam": "IPv4", 00:20:18.446 "traddr": "10.0.0.1", 00:20:18.446 "trsvcid": "38130" 00:20:18.446 }, 00:20:18.446 "auth": { 00:20:18.446 "state": "completed", 00:20:18.446 "digest": "sha256", 00:20:18.446 "dhgroup": "ffdhe2048" 00:20:18.446 } 00:20:18.446 } 00:20:18.446 ]' 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.446 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.703 15:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.634 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.892 15:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.150 00:20:20.150 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:20.150 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:20.150 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.407 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.407 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:20.408 { 00:20:20.408 "cntlid": 15, 00:20:20.408 "qid": 0, 00:20:20.408 "state": "enabled", 00:20:20.408 "listen_address": { 00:20:20.408 "trtype": "TCP", 00:20:20.408 "adrfam": "IPv4", 00:20:20.408 "traddr": "10.0.0.2", 00:20:20.408 "trsvcid": "4420" 00:20:20.408 }, 00:20:20.408 "peer_address": { 00:20:20.408 "trtype": "TCP", 00:20:20.408 "adrfam": "IPv4", 00:20:20.408 "traddr": "10.0.0.1", 00:20:20.408 "trsvcid": "33014" 00:20:20.408 }, 00:20:20.408 "auth": { 00:20:20.408 "state": "completed", 00:20:20.408 "digest": "sha256", 00:20:20.408 "dhgroup": "ffdhe2048" 00:20:20.408 } 00:20:20.408 } 
00:20:20.408 ]' 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.408 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:20.665 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.665 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.665 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.923 15:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:21.855 15:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.113 15:38:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:22.113 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:22.371 00:20:22.371 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:22.371 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:22.371 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:22.629 { 00:20:22.629 "cntlid": 17, 00:20:22.629 "qid": 0, 00:20:22.629 "state": "enabled", 00:20:22.629 "listen_address": { 00:20:22.629 "trtype": "TCP", 00:20:22.629 "adrfam": "IPv4", 00:20:22.629 "traddr": "10.0.0.2", 00:20:22.629 "trsvcid": "4420" 00:20:22.629 }, 00:20:22.629 "peer_address": { 00:20:22.629 "trtype": "TCP", 00:20:22.629 "adrfam": "IPv4", 00:20:22.629 "traddr": "10.0.0.1", 00:20:22.629 "trsvcid": "33054" 00:20:22.629 }, 00:20:22.629 "auth": { 00:20:22.629 "state": "completed", 00:20:22.629 "digest": "sha256", 00:20:22.629 "dhgroup": "ffdhe3072" 00:20:22.629 } 00:20:22.629 } 00:20:22.629 ]' 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.629 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.887 15:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.260 15:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:24.260 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:24.517 00:20:24.517 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:24.517 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:24.517 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.774 15:38:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:24.774 { 00:20:24.774 "cntlid": 19, 00:20:24.774 "qid": 0, 00:20:24.774 "state": "enabled", 00:20:24.774 "listen_address": { 00:20:24.774 "trtype": "TCP", 00:20:24.774 "adrfam": "IPv4", 00:20:24.774 "traddr": "10.0.0.2", 00:20:24.774 "trsvcid": "4420" 00:20:24.774 }, 00:20:24.774 "peer_address": { 00:20:24.774 "trtype": "TCP", 00:20:24.774 "adrfam": "IPv4", 00:20:24.774 "traddr": "10.0.0.1", 00:20:24.774 "trsvcid": "33090" 00:20:24.774 }, 00:20:24.774 "auth": { 00:20:24.774 "state": "completed", 00:20:24.774 "digest": "sha256", 00:20:24.774 "dhgroup": "ffdhe3072" 00:20:24.774 } 00:20:24.774 } 00:20:24.774 ]' 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.774 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.032 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.032 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.032 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.032 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.032 15:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.288 15:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.259 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:20:26.516 
15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:26.516 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:26.774 00:20:26.774 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:26.774 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:26.774 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.032 { 00:20:27.032 "cntlid": 21, 00:20:27.032 "qid": 0, 00:20:27.032 "state": "enabled", 00:20:27.032 "listen_address": { 00:20:27.032 "trtype": "TCP", 00:20:27.032 "adrfam": "IPv4", 00:20:27.032 "traddr": "10.0.0.2", 00:20:27.032 "trsvcid": "4420" 00:20:27.032 }, 00:20:27.032 "peer_address": { 00:20:27.032 "trtype": "TCP", 00:20:27.032 "adrfam": "IPv4", 00:20:27.032 "traddr": "10.0.0.1", 00:20:27.032 "trsvcid": "33120" 00:20:27.032 }, 00:20:27.032 "auth": { 00:20:27.032 "state": "completed", 00:20:27.032 "digest": "sha256", 00:20:27.032 "dhgroup": "ffdhe3072" 00:20:27.032 } 00:20:27.032 } 00:20:27.032 ]' 00:20:27.032 15:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.032 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.032 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.032 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.032 
15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.032 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.032 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.032 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.289 15:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.221 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.479 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.045 00:20:29.045 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:29.045 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.045 15:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:29.045 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.045 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.045 15:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.045 15:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:29.302 { 00:20:29.302 "cntlid": 23, 00:20:29.302 "qid": 0, 00:20:29.302 "state": "enabled", 00:20:29.302 "listen_address": { 00:20:29.302 "trtype": "TCP", 00:20:29.302 "adrfam": "IPv4", 00:20:29.302 "traddr": "10.0.0.2", 00:20:29.302 "trsvcid": "4420" 00:20:29.302 }, 00:20:29.302 "peer_address": { 00:20:29.302 "trtype": "TCP", 00:20:29.302 "adrfam": "IPv4", 00:20:29.302 "traddr": "10.0.0.1", 00:20:29.302 "trsvcid": "54310" 00:20:29.302 }, 00:20:29.302 "auth": { 00:20:29.302 "state": "completed", 00:20:29.302 "digest": "sha256", 00:20:29.302 "dhgroup": "ffdhe3072" 00:20:29.302 } 00:20:29.302 } 00:20:29.302 ]' 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.302 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.559 15:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.491 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:30.747 15:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:31.005 00:20:31.005 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.005 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:31.005 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:31.262 { 00:20:31.262 "cntlid": 25, 00:20:31.262 "qid": 0, 00:20:31.262 "state": "enabled", 00:20:31.262 
"listen_address": { 00:20:31.262 "trtype": "TCP", 00:20:31.262 "adrfam": "IPv4", 00:20:31.262 "traddr": "10.0.0.2", 00:20:31.262 "trsvcid": "4420" 00:20:31.262 }, 00:20:31.262 "peer_address": { 00:20:31.262 "trtype": "TCP", 00:20:31.262 "adrfam": "IPv4", 00:20:31.262 "traddr": "10.0.0.1", 00:20:31.262 "trsvcid": "54338" 00:20:31.262 }, 00:20:31.262 "auth": { 00:20:31.262 "state": "completed", 00:20:31.262 "digest": "sha256", 00:20:31.262 "dhgroup": "ffdhe4096" 00:20:31.262 } 00:20:31.262 } 00:20:31.262 ]' 00:20:31.262 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.519 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.777 15:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.708 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:32.965 15:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:33.529 00:20:33.529 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:33.529 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:33.529 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.786 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.786 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.786 15:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.786 15:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.786 15:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.786 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:33.786 { 00:20:33.786 "cntlid": 27, 00:20:33.786 "qid": 0, 00:20:33.786 "state": "enabled", 00:20:33.786 "listen_address": { 00:20:33.786 "trtype": "TCP", 00:20:33.786 "adrfam": "IPv4", 00:20:33.786 "traddr": "10.0.0.2", 00:20:33.786 "trsvcid": "4420" 00:20:33.786 }, 00:20:33.787 "peer_address": { 00:20:33.787 "trtype": "TCP", 00:20:33.787 "adrfam": "IPv4", 00:20:33.787 "traddr": "10.0.0.1", 00:20:33.787 "trsvcid": "54354" 00:20:33.787 }, 00:20:33.787 "auth": { 00:20:33.787 "state": "completed", 00:20:33.787 "digest": "sha256", 00:20:33.787 "dhgroup": "ffdhe4096" 00:20:33.787 } 00:20:33.787 } 00:20:33.787 ]' 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.787 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.045 15:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.976 15:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.233 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.491 00:20:35.491 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:35.491 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:35.491 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:35.748 { 00:20:35.748 "cntlid": 29, 00:20:35.748 "qid": 0, 00:20:35.748 "state": "enabled", 00:20:35.748 "listen_address": { 00:20:35.748 "trtype": "TCP", 00:20:35.748 "adrfam": "IPv4", 00:20:35.748 "traddr": "10.0.0.2", 00:20:35.748 "trsvcid": "4420" 00:20:35.748 }, 00:20:35.748 "peer_address": { 00:20:35.748 "trtype": "TCP", 00:20:35.748 "adrfam": "IPv4", 00:20:35.748 "traddr": "10.0.0.1", 00:20:35.748 "trsvcid": "54376" 00:20:35.748 }, 00:20:35.748 "auth": { 00:20:35.748 "state": "completed", 00:20:35.748 "digest": "sha256", 00:20:35.748 "dhgroup": "ffdhe4096" 00:20:35.748 } 00:20:35.748 } 00:20:35.748 ]' 00:20:35.748 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.005 15:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.263 15:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:37.195 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.453 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.017 00:20:38.017 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:38.017 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:38.017 15:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.017 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.017 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.017 15:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.017 15:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.017 15:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.017 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:38.017 { 00:20:38.017 "cntlid": 31, 00:20:38.017 "qid": 0, 00:20:38.017 "state": "enabled", 00:20:38.017 "listen_address": { 00:20:38.017 "trtype": "TCP", 00:20:38.017 "adrfam": "IPv4", 00:20:38.017 "traddr": "10.0.0.2", 00:20:38.017 "trsvcid": "4420" 00:20:38.017 }, 00:20:38.017 "peer_address": { 00:20:38.017 "trtype": "TCP", 00:20:38.017 "adrfam": "IPv4", 00:20:38.017 "traddr": "10.0.0.1", 00:20:38.017 "trsvcid": "54406" 00:20:38.017 }, 00:20:38.017 "auth": { 00:20:38.017 "state": "completed", 00:20:38.017 "digest": "sha256", 00:20:38.017 "dhgroup": "ffdhe4096" 00:20:38.017 } 00:20:38.017 } 00:20:38.017 ]' 00:20:38.017 
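For readability, the connect_authenticate iteration traced above (sha256 digest, ffdhe4096 dhgroup, key3) reduces to the short shell sequence below. It only restates calls that already appear verbatim in the log: the hostrpc definition mirrors the one shown at target/auth.sh@31, rpc_cmd is assumed to be the test framework's wrapper for the target's RPC socket, and the single combined jq filter condenses the three separate .digest/.dhgroup/.state checks, so treat this as a sketch of the flow rather than an extra test step.

  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  # limit the host-side bdev_nvme driver to the digest/dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target side: allow the host NQN on the subsystem with the key under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
  # host side: attach a controller, then confirm the negotiated auth parameters on the target qpair
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect sha256, ffdhe4096, completed
  # tear down: detach, exercise nvme-cli with the matching DHHC-1 secret, then drop the host again
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a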
15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.273 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.530 15:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:39.462 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:39.719 15:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:40.285 00:20:40.285 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:40.285 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:40.285 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:40.555 { 00:20:40.555 "cntlid": 33, 00:20:40.555 "qid": 0, 00:20:40.555 "state": "enabled", 00:20:40.555 "listen_address": { 00:20:40.555 "trtype": "TCP", 00:20:40.555 "adrfam": "IPv4", 00:20:40.555 "traddr": "10.0.0.2", 00:20:40.555 "trsvcid": "4420" 00:20:40.555 }, 00:20:40.555 "peer_address": { 00:20:40.555 "trtype": "TCP", 00:20:40.555 "adrfam": "IPv4", 00:20:40.555 "traddr": "10.0.0.1", 00:20:40.555 "trsvcid": "34684" 00:20:40.555 }, 00:20:40.555 "auth": { 00:20:40.555 "state": "completed", 00:20:40.555 "digest": "sha256", 00:20:40.555 "dhgroup": "ffdhe6144" 00:20:40.555 } 00:20:40.555 } 00:20:40.555 ]' 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.555 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:40.812 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.812 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:40.812 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.812 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.812 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.070 15:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.057 15:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.314 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:20:42.314 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.314 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:42.314 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:42.314 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.314 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:42.315 15:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.315 15:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.315 15:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.315 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:42.315 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:42.879 00:20:42.879 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:42.879 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:42.879 15:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.136 15:38:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:43.136 { 00:20:43.136 "cntlid": 35, 00:20:43.136 "qid": 0, 00:20:43.136 "state": "enabled", 00:20:43.136 "listen_address": { 00:20:43.136 "trtype": "TCP", 00:20:43.136 "adrfam": "IPv4", 00:20:43.136 "traddr": "10.0.0.2", 00:20:43.136 "trsvcid": "4420" 00:20:43.136 }, 00:20:43.136 "peer_address": { 00:20:43.136 "trtype": "TCP", 00:20:43.136 "adrfam": "IPv4", 00:20:43.136 "traddr": "10.0.0.1", 00:20:43.136 "trsvcid": "34718" 00:20:43.136 }, 00:20:43.136 "auth": { 00:20:43.136 "state": "completed", 00:20:43.136 "digest": "sha256", 00:20:43.136 "dhgroup": "ffdhe6144" 00:20:43.136 } 00:20:43.136 } 00:20:43.136 ]' 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.136 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:43.393 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.393 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.393 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.650 15:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.583 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:20:44.841 
15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:44.841 15:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:45.405 00:20:45.405 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:45.405 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:45.405 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:45.663 { 00:20:45.663 "cntlid": 37, 00:20:45.663 "qid": 0, 00:20:45.663 "state": "enabled", 00:20:45.663 "listen_address": { 00:20:45.663 "trtype": "TCP", 00:20:45.663 "adrfam": "IPv4", 00:20:45.663 "traddr": "10.0.0.2", 00:20:45.663 "trsvcid": "4420" 00:20:45.663 }, 00:20:45.663 "peer_address": { 00:20:45.663 "trtype": "TCP", 00:20:45.663 "adrfam": "IPv4", 00:20:45.663 "traddr": "10.0.0.1", 00:20:45.663 "trsvcid": "34746" 00:20:45.663 }, 00:20:45.663 "auth": { 00:20:45.663 "state": "completed", 00:20:45.663 "digest": "sha256", 00:20:45.663 "dhgroup": "ffdhe6144" 00:20:45.663 } 00:20:45.663 } 00:20:45.663 ]' 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.663 
15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.663 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.921 15:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.854 15:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.111 15:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.369 15:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.369 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.369 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.933 00:20:47.933 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:47.933 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:47.933 15:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:48.191 { 00:20:48.191 "cntlid": 39, 00:20:48.191 "qid": 0, 00:20:48.191 "state": "enabled", 00:20:48.191 "listen_address": { 00:20:48.191 "trtype": "TCP", 00:20:48.191 "adrfam": "IPv4", 00:20:48.191 "traddr": "10.0.0.2", 00:20:48.191 "trsvcid": "4420" 00:20:48.191 }, 00:20:48.191 "peer_address": { 00:20:48.191 "trtype": "TCP", 00:20:48.191 "adrfam": "IPv4", 00:20:48.191 "traddr": "10.0.0.1", 00:20:48.191 "trsvcid": "34780" 00:20:48.191 }, 00:20:48.191 "auth": { 00:20:48.191 "state": "completed", 00:20:48.191 "digest": "sha256", 00:20:48.191 "dhgroup": "ffdhe6144" 00:20:48.191 } 00:20:48.191 } 00:20:48.191 ]' 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.191 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.449 15:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.379 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:49.637 15:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:50.569 00:20:50.569 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:50.569 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:50.569 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:50.826 { 00:20:50.826 "cntlid": 41, 00:20:50.826 "qid": 0, 00:20:50.826 "state": "enabled", 00:20:50.826 
"listen_address": { 00:20:50.826 "trtype": "TCP", 00:20:50.826 "adrfam": "IPv4", 00:20:50.826 "traddr": "10.0.0.2", 00:20:50.826 "trsvcid": "4420" 00:20:50.826 }, 00:20:50.826 "peer_address": { 00:20:50.826 "trtype": "TCP", 00:20:50.826 "adrfam": "IPv4", 00:20:50.826 "traddr": "10.0.0.1", 00:20:50.826 "trsvcid": "34236" 00:20:50.826 }, 00:20:50.826 "auth": { 00:20:50.826 "state": "completed", 00:20:50.826 "digest": "sha256", 00:20:50.826 "dhgroup": "ffdhe8192" 00:20:50.826 } 00:20:50.826 } 00:20:50.826 ]' 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.826 15:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.084 15:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:20:52.016 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.016 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:52.017 15:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.017 15:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.017 15:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.017 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:52.017 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.017 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:52.274 15:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:53.206 00:20:53.206 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:53.206 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:53.206 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:53.463 { 00:20:53.463 "cntlid": 43, 00:20:53.463 "qid": 0, 00:20:53.463 "state": "enabled", 00:20:53.463 "listen_address": { 00:20:53.463 "trtype": "TCP", 00:20:53.463 "adrfam": "IPv4", 00:20:53.463 "traddr": "10.0.0.2", 00:20:53.463 "trsvcid": "4420" 00:20:53.463 }, 00:20:53.463 "peer_address": { 00:20:53.463 "trtype": "TCP", 00:20:53.463 "adrfam": "IPv4", 00:20:53.463 "traddr": "10.0.0.1", 00:20:53.463 "trsvcid": "34280" 00:20:53.463 }, 00:20:53.463 "auth": { 00:20:53.463 "state": "completed", 00:20:53.463 "digest": "sha256", 00:20:53.463 "dhgroup": "ffdhe8192" 00:20:53.463 } 00:20:53.463 } 00:20:53.463 ]' 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.463 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:53.722 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.722 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:53.722 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.722 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.722 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.980 15:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.914 15:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:55.172 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:56.105 00:20:56.105 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:56.105 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:56.105 15:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.363 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:56.364 { 00:20:56.364 "cntlid": 45, 00:20:56.364 "qid": 0, 00:20:56.364 "state": "enabled", 00:20:56.364 "listen_address": { 00:20:56.364 "trtype": "TCP", 00:20:56.364 "adrfam": "IPv4", 00:20:56.364 "traddr": "10.0.0.2", 00:20:56.364 "trsvcid": "4420" 00:20:56.364 }, 00:20:56.364 "peer_address": { 00:20:56.364 "trtype": "TCP", 00:20:56.364 "adrfam": "IPv4", 00:20:56.364 "traddr": "10.0.0.1", 00:20:56.364 "trsvcid": "34296" 00:20:56.364 }, 00:20:56.364 "auth": { 00:20:56.364 "state": "completed", 00:20:56.364 "digest": "sha256", 00:20:56.364 "dhgroup": "ffdhe8192" 00:20:56.364 } 00:20:56.364 } 00:20:56.364 ]' 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.364 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.621 15:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe8192 00:20:57.554 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.812 15:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.798 00:20:58.798 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:58.798 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:58.798 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.055 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.055 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.055 15:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.055 15:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.056 15:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.056 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:59.056 { 00:20:59.056 "cntlid": 47, 00:20:59.056 "qid": 0, 00:20:59.056 "state": "enabled", 00:20:59.056 "listen_address": { 00:20:59.056 "trtype": "TCP", 00:20:59.056 "adrfam": "IPv4", 00:20:59.056 "traddr": "10.0.0.2", 00:20:59.056 "trsvcid": "4420" 00:20:59.056 }, 00:20:59.056 "peer_address": { 00:20:59.056 "trtype": "TCP", 00:20:59.056 "adrfam": "IPv4", 00:20:59.056 "traddr": "10.0.0.1", 00:20:59.056 "trsvcid": "34310" 00:20:59.056 }, 00:20:59.056 "auth": { 00:20:59.056 "state": "completed", 00:20:59.056 "digest": "sha256", 00:20:59.056 "dhgroup": "ffdhe8192" 00:20:59.056 } 00:20:59.056 } 00:20:59.056 ]' 00:20:59.056 15:39:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:59.056 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.056 15:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:59.056 15:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.056 15:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:59.056 15:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.056 15:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.056 15:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.314 15:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:00.245 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:00.502 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:00.759 00:21:00.759 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:00.759 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.759 15:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:01.016 { 00:21:01.016 "cntlid": 49, 00:21:01.016 "qid": 0, 00:21:01.016 "state": "enabled", 00:21:01.016 "listen_address": { 00:21:01.016 "trtype": "TCP", 00:21:01.016 "adrfam": "IPv4", 00:21:01.016 "traddr": "10.0.0.2", 00:21:01.016 "trsvcid": "4420" 00:21:01.016 }, 00:21:01.016 "peer_address": { 00:21:01.016 "trtype": "TCP", 00:21:01.016 "adrfam": "IPv4", 00:21:01.016 "traddr": "10.0.0.1", 00:21:01.016 "trsvcid": "55102" 00:21:01.016 }, 00:21:01.016 "auth": { 00:21:01.016 "state": "completed", 00:21:01.016 "digest": "sha384", 00:21:01.016 "dhgroup": "null" 00:21:01.016 } 00:21:01.016 } 00:21:01.016 ]' 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.016 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:01.274 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:01.274 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:01.274 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.274 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.274 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.531 15:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.460 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:02.717 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:02.973 00:21:02.973 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.973 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.973 15:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:03.230 { 00:21:03.230 "cntlid": 51, 00:21:03.230 "qid": 0, 00:21:03.230 "state": "enabled", 00:21:03.230 "listen_address": { 00:21:03.230 "trtype": "TCP", 00:21:03.230 "adrfam": "IPv4", 00:21:03.230 "traddr": "10.0.0.2", 00:21:03.230 "trsvcid": "4420" 00:21:03.230 }, 00:21:03.230 "peer_address": { 00:21:03.230 "trtype": "TCP", 00:21:03.230 "adrfam": "IPv4", 00:21:03.230 "traddr": "10.0.0.1", 00:21:03.230 "trsvcid": "55126" 00:21:03.230 }, 00:21:03.230 "auth": { 00:21:03.230 "state": "completed", 00:21:03.230 "digest": "sha384", 00:21:03.230 "dhgroup": "null" 00:21:03.230 } 00:21:03.230 } 00:21:03.230 ]' 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.230 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.487 15:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.418 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key qpairs 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:04.675 15:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:04.932 00:21:04.932 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:04.932 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:04.932 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:05.189 { 00:21:05.189 "cntlid": 53, 00:21:05.189 "qid": 0, 00:21:05.189 "state": "enabled", 00:21:05.189 "listen_address": { 00:21:05.189 "trtype": "TCP", 00:21:05.189 "adrfam": "IPv4", 00:21:05.189 "traddr": "10.0.0.2", 00:21:05.189 "trsvcid": "4420" 00:21:05.189 }, 00:21:05.189 "peer_address": { 00:21:05.189 "trtype": "TCP", 00:21:05.189 "adrfam": "IPv4", 00:21:05.189 "traddr": "10.0.0.1", 00:21:05.189 "trsvcid": "55142" 00:21:05.189 }, 00:21:05.189 "auth": { 00:21:05.189 "state": "completed", 00:21:05.189 "digest": "sha384", 00:21:05.189 "dhgroup": "null" 00:21:05.189 } 00:21:05.189 } 00:21:05.189 ]' 00:21:05.189 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 
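The run above is one complete DH-HMAC-CHAP round against nqn.2024-03.io.spdk:cnode0: the host RPC server is limited to a single digest and DH group, the key is allowed for the host NQN on the target, a controller is attached, the negotiated auth state, digest and dhgroup are checked on the target qpair, and the same round is then repeated through the kernel initiator with nvme-cli before the host entry is removed again. A minimal standalone sketch of one such round, not the harness itself, is below; the rpc.py path, sockets, addresses, NQNs, key names and the key0 DHHC-1 secret are copied from this run, the target-side RPCs are assumed to go to the target app's default socket, and the outer loop over digests, DH groups and keys is omitted.

#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP round from the test above (values copied from the log;
# assumes the target subsystem, listener and keys key0..key3 are already configured,
# and that target-side RPCs use the target app's default RPC socket).
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
hostid=29f67375-a902-e411-ace9-001e67bc3c9a
digest=sha256 dhgroup=ffdhe8192 key=key0   # one combination from the sweep
secret='DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==:'   # key0 secret from this run

# Host side: only accept this digest/DH group for DH-HMAC-CHAP.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Target side: allow the host NQN with the matching key, then attach and authenticate.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# The controller must exist and the target qpair must report a completed handshake
# with the expected digest and DH group.
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

# Same round through the kernel initiator, using the plaintext DHHC-1 secret.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each (digest, dhgroup, key) combination therefore exercises both the SPDK host stack (bdev_nvme over the host.sock RPC server) and the Linux kernel initiator (nvme-cli with --dhchap-secret), which is why the same add_host/attach/verify/detach/connect/disconnect pattern repeats throughout this part of the log.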
00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.446 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.703 15:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.634 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.892 15:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:21:07.150 00:21:07.407 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:07.407 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.407 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:07.407 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:07.665 { 00:21:07.665 "cntlid": 55, 00:21:07.665 "qid": 0, 00:21:07.665 "state": "enabled", 00:21:07.665 "listen_address": { 00:21:07.665 "trtype": "TCP", 00:21:07.665 "adrfam": "IPv4", 00:21:07.665 "traddr": "10.0.0.2", 00:21:07.665 "trsvcid": "4420" 00:21:07.665 }, 00:21:07.665 "peer_address": { 00:21:07.665 "trtype": "TCP", 00:21:07.665 "adrfam": "IPv4", 00:21:07.665 "traddr": "10.0.0.1", 00:21:07.665 "trsvcid": "55158" 00:21:07.665 }, 00:21:07.665 "auth": { 00:21:07.665 "state": "completed", 00:21:07.665 "digest": "sha384", 00:21:07.665 "dhgroup": "null" 00:21:07.665 } 00:21:07.665 } 00:21:07.665 ]' 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.665 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.923 15:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.852 
15:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.852 15:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:09.110 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:09.368 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.625 15:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.882 15:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:09.883 { 00:21:09.883 "cntlid": 57, 00:21:09.883 "qid": 0, 00:21:09.883 "state": "enabled", 00:21:09.883 "listen_address": { 00:21:09.883 "trtype": "TCP", 00:21:09.883 "adrfam": "IPv4", 00:21:09.883 "traddr": "10.0.0.2", 
00:21:09.883 "trsvcid": "4420" 00:21:09.883 }, 00:21:09.883 "peer_address": { 00:21:09.883 "trtype": "TCP", 00:21:09.883 "adrfam": "IPv4", 00:21:09.883 "traddr": "10.0.0.1", 00:21:09.883 "trsvcid": "54306" 00:21:09.883 }, 00:21:09.883 "auth": { 00:21:09.883 "state": "completed", 00:21:09.883 "digest": "sha384", 00:21:09.883 "dhgroup": "ffdhe2048" 00:21:09.883 } 00:21:09.883 } 00:21:09.883 ]' 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.883 15:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.140 15:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.072 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:11.330 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:11.587 00:21:11.587 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:11.587 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:11.587 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:11.845 { 00:21:11.845 "cntlid": 59, 00:21:11.845 "qid": 0, 00:21:11.845 "state": "enabled", 00:21:11.845 "listen_address": { 00:21:11.845 "trtype": "TCP", 00:21:11.845 "adrfam": "IPv4", 00:21:11.845 "traddr": "10.0.0.2", 00:21:11.845 "trsvcid": "4420" 00:21:11.845 }, 00:21:11.845 "peer_address": { 00:21:11.845 "trtype": "TCP", 00:21:11.845 "adrfam": "IPv4", 00:21:11.845 "traddr": "10.0.0.1", 00:21:11.845 "trsvcid": "54316" 00:21:11.845 }, 00:21:11.845 "auth": { 00:21:11.845 "state": "completed", 00:21:11.845 "digest": "sha384", 00:21:11.845 "dhgroup": "ffdhe2048" 00:21:11.845 } 00:21:11.845 } 00:21:11.845 ]' 00:21:11.845 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.103 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.103 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.103 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.103 15:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.103 15:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.103 15:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.103 15:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:12.360 15:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.327 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.585 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.842 00:21:13.842 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:13.842 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:13.842 15:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:14.100 { 00:21:14.100 "cntlid": 61, 00:21:14.100 "qid": 0, 00:21:14.100 "state": "enabled", 00:21:14.100 "listen_address": { 00:21:14.100 "trtype": "TCP", 00:21:14.100 "adrfam": "IPv4", 00:21:14.100 "traddr": "10.0.0.2", 00:21:14.100 "trsvcid": "4420" 00:21:14.100 }, 00:21:14.100 "peer_address": { 00:21:14.100 "trtype": "TCP", 00:21:14.100 "adrfam": "IPv4", 00:21:14.100 "traddr": "10.0.0.1", 00:21:14.100 "trsvcid": "54346" 00:21:14.100 }, 00:21:14.100 "auth": { 00:21:14.100 "state": "completed", 00:21:14.100 "digest": "sha384", 00:21:14.100 "dhgroup": "ffdhe2048" 00:21:14.100 } 00:21:14.100 } 00:21:14.100 ]' 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.100 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:14.358 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.358 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.358 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.358 15:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.289 15:39:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.853 15:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.111 00:21:16.111 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:16.111 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:16.111 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:16.368 { 00:21:16.368 "cntlid": 63, 00:21:16.368 "qid": 0, 00:21:16.368 "state": "enabled", 00:21:16.368 "listen_address": { 00:21:16.368 "trtype": "TCP", 00:21:16.368 "adrfam": "IPv4", 00:21:16.368 "traddr": "10.0.0.2", 00:21:16.368 "trsvcid": "4420" 00:21:16.368 }, 00:21:16.368 "peer_address": { 00:21:16.368 "trtype": "TCP", 00:21:16.368 "adrfam": "IPv4", 00:21:16.368 "traddr": "10.0.0.1", 00:21:16.368 "trsvcid": "54378" 00:21:16.368 }, 00:21:16.368 "auth": { 00:21:16.368 "state": "completed", 00:21:16.368 "digest": "sha384", 00:21:16.368 "dhgroup": "ffdhe2048" 00:21:16.368 } 00:21:16.368 } 00:21:16.368 ]' 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:16.368 
15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.368 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.934 15:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.867 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:17.868 15:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.433 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.433 15:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:18.691 { 00:21:18.691 "cntlid": 65, 00:21:18.691 "qid": 0, 00:21:18.691 "state": "enabled", 00:21:18.691 "listen_address": { 00:21:18.691 "trtype": "TCP", 00:21:18.691 "adrfam": "IPv4", 00:21:18.691 "traddr": "10.0.0.2", 00:21:18.691 "trsvcid": "4420" 00:21:18.691 }, 00:21:18.691 "peer_address": { 00:21:18.691 "trtype": "TCP", 00:21:18.691 "adrfam": "IPv4", 00:21:18.691 "traddr": "10.0.0.1", 00:21:18.691 "trsvcid": "52974" 00:21:18.691 }, 00:21:18.691 "auth": { 00:21:18.691 "state": "completed", 00:21:18.691 "digest": "sha384", 00:21:18.691 "dhgroup": "ffdhe3072" 00:21:18.691 } 00:21:18.691 } 00:21:18.691 ]' 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.691 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.949 15:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.881 15:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:20.138 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:20.395 00:21:20.395 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:20.395 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.395 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:20.651 { 00:21:20.651 "cntlid": 67, 00:21:20.651 "qid": 0, 00:21:20.651 "state": "enabled", 00:21:20.651 "listen_address": { 00:21:20.651 "trtype": "TCP", 00:21:20.651 "adrfam": "IPv4", 00:21:20.651 "traddr": "10.0.0.2", 00:21:20.651 "trsvcid": "4420" 00:21:20.651 }, 00:21:20.651 "peer_address": { 00:21:20.651 "trtype": "TCP", 00:21:20.651 "adrfam": "IPv4", 00:21:20.651 "traddr": "10.0.0.1", 00:21:20.651 "trsvcid": "53006" 00:21:20.651 }, 00:21:20.651 "auth": { 00:21:20.651 "state": "completed", 00:21:20.651 "digest": "sha384", 00:21:20.651 "dhgroup": "ffdhe3072" 00:21:20.651 } 00:21:20.651 } 00:21:20.651 ]' 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.651 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:20.907 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.907 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:20.907 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.907 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.907 15:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.164 15:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:21:22.096 15:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.096 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # digest=sha384 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.354 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.612 00:21:22.612 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:22.612 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:22.612 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:22.870 { 00:21:22.870 "cntlid": 69, 00:21:22.870 "qid": 0, 00:21:22.870 "state": "enabled", 00:21:22.870 "listen_address": { 00:21:22.870 "trtype": "TCP", 00:21:22.870 "adrfam": "IPv4", 00:21:22.870 "traddr": "10.0.0.2", 00:21:22.870 "trsvcid": "4420" 00:21:22.870 }, 00:21:22.870 "peer_address": { 00:21:22.870 "trtype": "TCP", 00:21:22.870 "adrfam": "IPv4", 00:21:22.870 "traddr": "10.0.0.1", 00:21:22.870 "trsvcid": "53030" 00:21:22.870 }, 00:21:22.870 "auth": { 00:21:22.870 "state": "completed", 00:21:22.870 "digest": "sha384", 00:21:22.870 "dhgroup": "ffdhe3072" 00:21:22.870 } 00:21:22.870 } 00:21:22.870 ]' 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.870 15:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.435 15:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:21:23.999 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:24.256 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.514 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.772 00:21:24.772 15:39:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:24.772 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:24.772 15:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.030 { 00:21:25.030 "cntlid": 71, 00:21:25.030 "qid": 0, 00:21:25.030 "state": "enabled", 00:21:25.030 "listen_address": { 00:21:25.030 "trtype": "TCP", 00:21:25.030 "adrfam": "IPv4", 00:21:25.030 "traddr": "10.0.0.2", 00:21:25.030 "trsvcid": "4420" 00:21:25.030 }, 00:21:25.030 "peer_address": { 00:21:25.030 "trtype": "TCP", 00:21:25.030 "adrfam": "IPv4", 00:21:25.030 "traddr": "10.0.0.1", 00:21:25.030 "trsvcid": "53062" 00:21:25.030 }, 00:21:25.030 "auth": { 00:21:25.030 "state": "completed", 00:21:25.030 "digest": "sha384", 00:21:25.030 "dhgroup": "ffdhe3072" 00:21:25.030 } 00:21:25.030 } 00:21:25.030 ]' 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.030 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.031 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.031 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.288 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.288 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.288 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.546 15:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.480 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:26.738 15:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:26.995 00:21:26.995 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:26.995 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.995 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:27.252 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.252 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.252 15:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.252 15:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.252 15:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.252 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:27.252 { 00:21:27.252 "cntlid": 73, 00:21:27.252 "qid": 0, 00:21:27.252 "state": "enabled", 00:21:27.252 "listen_address": { 00:21:27.252 "trtype": "TCP", 00:21:27.252 "adrfam": "IPv4", 00:21:27.252 "traddr": "10.0.0.2", 00:21:27.252 "trsvcid": "4420" 00:21:27.252 }, 
00:21:27.252 "peer_address": { 00:21:27.252 "trtype": "TCP", 00:21:27.252 "adrfam": "IPv4", 00:21:27.252 "traddr": "10.0.0.1", 00:21:27.252 "trsvcid": "53100" 00:21:27.253 }, 00:21:27.253 "auth": { 00:21:27.253 "state": "completed", 00:21:27.253 "digest": "sha384", 00:21:27.253 "dhgroup": "ffdhe4096" 00:21:27.253 } 00:21:27.253 } 00:21:27.253 ]' 00:21:27.253 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:27.253 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.253 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:27.253 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.253 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:27.510 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.510 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.510 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.789 15:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:28.738 15:39:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.738 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:28.739 15:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:29.304 00:21:29.304 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:29.304 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.304 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:29.561 { 00:21:29.561 "cntlid": 75, 00:21:29.561 "qid": 0, 00:21:29.561 "state": "enabled", 00:21:29.561 "listen_address": { 00:21:29.561 "trtype": "TCP", 00:21:29.561 "adrfam": "IPv4", 00:21:29.561 "traddr": "10.0.0.2", 00:21:29.561 "trsvcid": "4420" 00:21:29.561 }, 00:21:29.561 "peer_address": { 00:21:29.561 "trtype": "TCP", 00:21:29.561 "adrfam": "IPv4", 00:21:29.561 "traddr": "10.0.0.1", 00:21:29.561 "trsvcid": "37538" 00:21:29.561 }, 00:21:29.561 "auth": { 00:21:29.561 "state": "completed", 00:21:29.561 "digest": "sha384", 00:21:29.561 "dhgroup": "ffdhe4096" 00:21:29.561 } 00:21:29.561 } 00:21:29.561 ]' 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.561 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.818 15:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:21:30.749 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.749 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.749 15:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.750 15:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.750 15:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.750 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:30.750 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.750 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.007 15:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.007 15:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.007 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.007 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.264 00:21:31.521 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:31.521 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:31.521 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
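The same keys are then exercised through the kernel initiator: once the host-RPC controller is detached, each pass connects with the matching DHHC-1 secret, disconnects, and removes the host from the subsystem before the next key is tried. A condensed sketch of that part, with the secret elided here (the full DHHC-1 strings appear in the trace; rpc_cmd is the autotest helper that forwards to rpc.py on the target):
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
  --dhchap-secret DHHC-1:01:...        # per-key secret, elided in this sketch
nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # expected: "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a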
00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:31.778 { 00:21:31.778 "cntlid": 77, 00:21:31.778 "qid": 0, 00:21:31.778 "state": "enabled", 00:21:31.778 "listen_address": { 00:21:31.778 "trtype": "TCP", 00:21:31.778 "adrfam": "IPv4", 00:21:31.778 "traddr": "10.0.0.2", 00:21:31.778 "trsvcid": "4420" 00:21:31.778 }, 00:21:31.778 "peer_address": { 00:21:31.778 "trtype": "TCP", 00:21:31.778 "adrfam": "IPv4", 00:21:31.778 "traddr": "10.0.0.1", 00:21:31.778 "trsvcid": "37570" 00:21:31.778 }, 00:21:31.778 "auth": { 00:21:31.778 "state": "completed", 00:21:31.778 "digest": "sha384", 00:21:31.778 "dhgroup": "ffdhe4096" 00:21:31.778 } 00:21:31.778 } 00:21:31.778 ]' 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.778 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:31.779 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.779 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.779 15:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.036 15:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:32.970 15:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.228 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.485 00:21:33.485 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:33.485 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:33.485 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:33.743 { 00:21:33.743 "cntlid": 79, 00:21:33.743 "qid": 0, 00:21:33.743 "state": "enabled", 00:21:33.743 "listen_address": { 00:21:33.743 "trtype": "TCP", 00:21:33.743 "adrfam": "IPv4", 00:21:33.743 "traddr": "10.0.0.2", 00:21:33.743 "trsvcid": "4420" 00:21:33.743 }, 00:21:33.743 "peer_address": { 00:21:33.743 "trtype": "TCP", 00:21:33.743 "adrfam": "IPv4", 00:21:33.743 "traddr": "10.0.0.1", 00:21:33.743 "trsvcid": "37594" 00:21:33.743 }, 00:21:33.743 "auth": { 00:21:33.743 "state": "completed", 00:21:33.743 "digest": "sha384", 00:21:33.743 "dhgroup": "ffdhe4096" 00:21:33.743 } 00:21:33.743 } 00:21:33.743 ]' 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:33.743 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.743 15:39:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:34.000 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.000 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:34.000 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.000 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.000 15:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.257 15:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:35.186 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.187 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.443 15:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:36.006 00:21:36.007 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:36.007 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:36.007 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:36.264 { 00:21:36.264 "cntlid": 81, 00:21:36.264 "qid": 0, 00:21:36.264 "state": "enabled", 00:21:36.264 "listen_address": { 00:21:36.264 "trtype": "TCP", 00:21:36.264 "adrfam": "IPv4", 00:21:36.264 "traddr": "10.0.0.2", 00:21:36.264 "trsvcid": "4420" 00:21:36.264 }, 00:21:36.264 "peer_address": { 00:21:36.264 "trtype": "TCP", 00:21:36.264 "adrfam": "IPv4", 00:21:36.264 "traddr": "10.0.0.1", 00:21:36.264 "trsvcid": "37626" 00:21:36.264 }, 00:21:36.264 "auth": { 00:21:36.264 "state": "completed", 00:21:36.264 "digest": "sha384", 00:21:36.264 "dhgroup": "ffdhe6144" 00:21:36.264 } 00:21:36.264 } 00:21:36.264 ]' 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.264 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:36.521 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.521 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.521 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.778 15:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.728 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:37.728 15:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:38.292 00:21:38.292 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:38.293 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:38.293 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:38.550 { 00:21:38.550 "cntlid": 83, 00:21:38.550 "qid": 0, 00:21:38.550 "state": "enabled", 00:21:38.550 "listen_address": { 00:21:38.550 "trtype": "TCP", 00:21:38.550 "adrfam": "IPv4", 00:21:38.550 "traddr": "10.0.0.2", 00:21:38.550 "trsvcid": "4420" 00:21:38.550 }, 00:21:38.550 "peer_address": { 00:21:38.550 "trtype": "TCP", 00:21:38.550 "adrfam": "IPv4", 00:21:38.550 "traddr": "10.0.0.1", 00:21:38.550 "trsvcid": "37640" 00:21:38.550 }, 00:21:38.550 "auth": { 00:21:38.550 "state": "completed", 00:21:38.550 "digest": "sha384", 00:21:38.550 "dhgroup": "ffdhe6144" 00:21:38.550 } 00:21:38.550 } 00:21:38.550 ]' 00:21:38.550 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.807 15:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.064 15:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:21:39.994 15:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.994 15:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.994 15:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.995 15:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.995 15:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.995 15:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:39.995 15:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:39.995 15:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:40.251 15:39:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:40.251 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:40.815 00:21:40.815 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:40.815 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:40.815 15:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:41.074 { 00:21:41.074 "cntlid": 85, 00:21:41.074 "qid": 0, 00:21:41.074 "state": "enabled", 00:21:41.074 "listen_address": { 00:21:41.074 "trtype": "TCP", 00:21:41.074 "adrfam": "IPv4", 00:21:41.074 "traddr": "10.0.0.2", 00:21:41.074 "trsvcid": "4420" 00:21:41.074 }, 00:21:41.074 "peer_address": { 00:21:41.074 "trtype": "TCP", 00:21:41.074 "adrfam": "IPv4", 00:21:41.074 "traddr": "10.0.0.1", 00:21:41.074 "trsvcid": "37768" 00:21:41.074 }, 00:21:41.074 "auth": { 00:21:41.074 "state": "completed", 00:21:41.074 "digest": "sha384", 00:21:41.074 "dhgroup": "ffdhe6144" 00:21:41.074 } 00:21:41.074 } 00:21:41.074 ]' 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
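[Reference sketch] The trace above repeats the same host/target round trip for every digest, DH group and key index that nvmf_auth_target iterates over. The lines below are a minimal reconstruction of one such iteration, using only commands that appear in this trace; the digest/dhgroup/key values (sha384, ffdhe6144, key1) are just the ones from the surrounding entries, rpc_cmd is presumed to be the target-side RPC helper from autotest_common.sh, and the named keys (key0..key3) are assumed to have been registered earlier in auth.sh and are not defined here.
    # Host-side RPC wrapper used throughout this test (socket path taken from the trace)
    HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # 1. Restrict the host initiator to a single digest / DH-group combination
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # 2. Allow the host NQN on the target subsystem with the key under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
    # 3. Attach a controller from the SPDK host and authenticate over TCP
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
    # 4. Verify on the target that the new qpair completed DH-HMAC-CHAP with the expected parameters
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth'                                        # digest/dhgroup/state checks
    # 5. Tear down, then repeat the connect with the kernel initiator, passing the secret directly
    $HOSTRPC bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
This is only a readability aid for the log; the authoritative sequence is the auth.sh line numbers (@34-@54, @84-@89) echoed in the xtrace entries themselves.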
00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.074 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.331 15:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.703 15:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.308 00:21:43.308 15:39:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:43.308 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:43.308 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:43.566 { 00:21:43.566 "cntlid": 87, 00:21:43.566 "qid": 0, 00:21:43.566 "state": "enabled", 00:21:43.566 "listen_address": { 00:21:43.566 "trtype": "TCP", 00:21:43.566 "adrfam": "IPv4", 00:21:43.566 "traddr": "10.0.0.2", 00:21:43.566 "trsvcid": "4420" 00:21:43.566 }, 00:21:43.566 "peer_address": { 00:21:43.566 "trtype": "TCP", 00:21:43.566 "adrfam": "IPv4", 00:21:43.566 "traddr": "10.0.0.1", 00:21:43.566 "trsvcid": "37790" 00:21:43.566 }, 00:21:43.566 "auth": { 00:21:43.566 "state": "completed", 00:21:43.566 "digest": "sha384", 00:21:43.566 "dhgroup": "ffdhe6144" 00:21:43.566 } 00:21:43.566 } 00:21:43.566 ]' 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.566 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.823 15:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:44.755 15:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:45.012 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:45.943 00:21:45.943 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:45.943 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.943 15:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:46.201 { 00:21:46.201 "cntlid": 89, 00:21:46.201 "qid": 0, 00:21:46.201 "state": "enabled", 00:21:46.201 "listen_address": { 00:21:46.201 "trtype": "TCP", 00:21:46.201 "adrfam": "IPv4", 00:21:46.201 "traddr": "10.0.0.2", 00:21:46.201 "trsvcid": "4420" 00:21:46.201 }, 00:21:46.201 "peer_address": { 
00:21:46.201 "trtype": "TCP", 00:21:46.201 "adrfam": "IPv4", 00:21:46.201 "traddr": "10.0.0.1", 00:21:46.201 "trsvcid": "37812" 00:21:46.201 }, 00:21:46.201 "auth": { 00:21:46.201 "state": "completed", 00:21:46.201 "digest": "sha384", 00:21:46.201 "dhgroup": "ffdhe8192" 00:21:46.201 } 00:21:46.201 } 00:21:46.201 ]' 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.201 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:46.457 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.457 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.457 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.714 15:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.646 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:47.904 15:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:48.835 00:21:48.835 15:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:48.835 15:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.835 15:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:49.092 { 00:21:49.092 "cntlid": 91, 00:21:49.092 "qid": 0, 00:21:49.092 "state": "enabled", 00:21:49.092 "listen_address": { 00:21:49.092 "trtype": "TCP", 00:21:49.092 "adrfam": "IPv4", 00:21:49.092 "traddr": "10.0.0.2", 00:21:49.092 "trsvcid": "4420" 00:21:49.092 }, 00:21:49.092 "peer_address": { 00:21:49.092 "trtype": "TCP", 00:21:49.092 "adrfam": "IPv4", 00:21:49.092 "traddr": "10.0.0.1", 00:21:49.092 "trsvcid": "47584" 00:21:49.092 }, 00:21:49.092 "auth": { 00:21:49.092 "state": "completed", 00:21:49.092 "digest": "sha384", 00:21:49.092 "dhgroup": "ffdhe8192" 00:21:49.092 } 00:21:49.092 } 00:21:49.092 ]' 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.092 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.349 15:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:50.281 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.537 15:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.794 15:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.794 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:50.794 15:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:51.726 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.726 15:40:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:51.726 { 00:21:51.726 "cntlid": 93, 00:21:51.726 "qid": 0, 00:21:51.726 "state": "enabled", 00:21:51.726 "listen_address": { 00:21:51.726 "trtype": "TCP", 00:21:51.726 "adrfam": "IPv4", 00:21:51.726 "traddr": "10.0.0.2", 00:21:51.726 "trsvcid": "4420" 00:21:51.726 }, 00:21:51.726 "peer_address": { 00:21:51.726 "trtype": "TCP", 00:21:51.726 "adrfam": "IPv4", 00:21:51.726 "traddr": "10.0.0.1", 00:21:51.726 "trsvcid": "47606" 00:21:51.726 }, 00:21:51.726 "auth": { 00:21:51.726 "state": "completed", 00:21:51.726 "digest": "sha384", 00:21:51.726 "dhgroup": "ffdhe8192" 00:21:51.726 } 00:21:51.726 } 00:21:51.726 ]' 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.726 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:51.982 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.982 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:51.982 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.982 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.982 15:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.239 15:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:21:53.170 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.170 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:53.170 15:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.171 15:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.171 15:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.171 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:53.171 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.171 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.428 15:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.359 00:21:54.359 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:54.359 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:54.359 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:54.616 { 00:21:54.616 "cntlid": 95, 00:21:54.616 "qid": 0, 00:21:54.616 "state": "enabled", 00:21:54.616 "listen_address": { 00:21:54.616 "trtype": "TCP", 00:21:54.616 "adrfam": "IPv4", 00:21:54.616 "traddr": "10.0.0.2", 00:21:54.616 "trsvcid": "4420" 00:21:54.616 }, 00:21:54.616 "peer_address": { 00:21:54.616 "trtype": "TCP", 00:21:54.616 "adrfam": "IPv4", 00:21:54.616 "traddr": "10.0.0.1", 00:21:54.616 "trsvcid": "47646" 00:21:54.616 }, 00:21:54.616 "auth": { 00:21:54.616 "state": "completed", 00:21:54.616 "digest": "sha384", 00:21:54.616 "dhgroup": "ffdhe8192" 00:21:54.616 } 00:21:54.616 } 00:21:54.616 ]' 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.616 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.873 15:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.805 15:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:56.062 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:56.319 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.576 15:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:56.833 { 00:21:56.833 "cntlid": 97, 00:21:56.833 "qid": 0, 00:21:56.833 "state": "enabled", 00:21:56.833 "listen_address": { 00:21:56.833 "trtype": "TCP", 00:21:56.833 "adrfam": "IPv4", 00:21:56.833 "traddr": "10.0.0.2", 00:21:56.833 "trsvcid": "4420" 00:21:56.833 }, 00:21:56.833 "peer_address": { 00:21:56.833 "trtype": "TCP", 00:21:56.833 "adrfam": "IPv4", 00:21:56.833 "traddr": "10.0.0.1", 00:21:56.833 "trsvcid": "47686" 00:21:56.833 }, 00:21:56.833 "auth": { 00:21:56.833 "state": "completed", 00:21:56.833 "digest": "sha512", 00:21:56.833 "dhgroup": "null" 00:21:56.833 } 00:21:56.833 } 00:21:56.833 ]' 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.833 15:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.090 15:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:21:58.022 15:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.022 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.022 15:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.022 15:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.022 15:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.022 15:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.022 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:58.022 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.022 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:58.299 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:58.571 00:21:58.571 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:58.571 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:58.571 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.827 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.827 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.827 15:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.827 15:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.827 15:40:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.827 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:58.827 { 00:21:58.827 "cntlid": 99, 00:21:58.827 "qid": 0, 00:21:58.827 "state": "enabled", 00:21:58.827 "listen_address": { 00:21:58.827 "trtype": "TCP", 00:21:58.827 "adrfam": "IPv4", 00:21:58.827 "traddr": "10.0.0.2", 00:21:58.827 "trsvcid": "4420" 00:21:58.827 }, 00:21:58.827 "peer_address": { 00:21:58.827 "trtype": "TCP", 00:21:58.828 "adrfam": "IPv4", 00:21:58.828 "traddr": "10.0.0.1", 00:21:58.828 "trsvcid": "47776" 00:21:58.828 }, 00:21:58.828 "auth": { 00:21:58.828 "state": "completed", 00:21:58.828 "digest": "sha512", 00:21:58.828 "dhgroup": "null" 00:21:58.828 } 00:21:58.828 } 00:21:58.828 ]' 00:21:58.828 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:58.828 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.828 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:58.828 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:58.828 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:59.085 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.085 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.085 15:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.342 15:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=null 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.274 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.839 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:00.839 { 00:22:00.839 "cntlid": 101, 00:22:00.839 "qid": 0, 00:22:00.839 "state": "enabled", 00:22:00.839 "listen_address": { 00:22:00.839 "trtype": "TCP", 00:22:00.839 "adrfam": "IPv4", 00:22:00.839 "traddr": "10.0.0.2", 00:22:00.839 "trsvcid": "4420" 00:22:00.839 }, 00:22:00.839 "peer_address": { 00:22:00.839 "trtype": "TCP", 00:22:00.839 "adrfam": "IPv4", 00:22:00.839 "traddr": "10.0.0.1", 00:22:00.839 "trsvcid": "47800" 00:22:00.839 }, 00:22:00.839 "auth": { 00:22:00.839 "state": "completed", 00:22:00.839 "digest": "sha512", 00:22:00.839 "dhgroup": "null" 00:22:00.839 } 00:22:00.839 } 00:22:00.839 ]' 00:22:00.839 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:01.097 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.097 15:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:01.097 15:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:01.097 15:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:01.097 15:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.097 15:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:01.097 15:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.354 15:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:02.287 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.545 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.803 00:22:02.803 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:02.803 15:40:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.803 15:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:03.060 { 00:22:03.060 "cntlid": 103, 00:22:03.060 "qid": 0, 00:22:03.060 "state": "enabled", 00:22:03.060 "listen_address": { 00:22:03.060 "trtype": "TCP", 00:22:03.060 "adrfam": "IPv4", 00:22:03.060 "traddr": "10.0.0.2", 00:22:03.060 "trsvcid": "4420" 00:22:03.060 }, 00:22:03.060 "peer_address": { 00:22:03.060 "trtype": "TCP", 00:22:03.060 "adrfam": "IPv4", 00:22:03.060 "traddr": "10.0.0.1", 00:22:03.060 "trsvcid": "47810" 00:22:03.060 }, 00:22:03.060 "auth": { 00:22:03.060 "state": "completed", 00:22:03.060 "digest": "sha512", 00:22:03.060 "dhgroup": "null" 00:22:03.060 } 00:22:03.060 } 00:22:03.060 ]' 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.060 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:03.317 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:03.317 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:03.317 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.317 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.317 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.575 15:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:22:04.508 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.508 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.509 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:04.766 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:05.023 00:22:05.023 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:05.023 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:05.023 15:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:05.280 { 00:22:05.280 "cntlid": 105, 00:22:05.280 "qid": 0, 00:22:05.280 "state": "enabled", 00:22:05.280 "listen_address": { 00:22:05.280 "trtype": "TCP", 00:22:05.280 "adrfam": "IPv4", 00:22:05.280 "traddr": "10.0.0.2", 00:22:05.280 "trsvcid": "4420" 00:22:05.280 }, 00:22:05.280 "peer_address": { 00:22:05.280 "trtype": "TCP", 00:22:05.280 "adrfam": "IPv4", 00:22:05.280 "traddr": "10.0.0.1", 00:22:05.280 "trsvcid": 
"47836" 00:22:05.280 }, 00:22:05.280 "auth": { 00:22:05.280 "state": "completed", 00:22:05.280 "digest": "sha512", 00:22:05.280 "dhgroup": "ffdhe2048" 00:22:05.280 } 00:22:05.280 } 00:22:05.280 ]' 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.280 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.539 15:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:22:06.470 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.470 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.470 15:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.471 15:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.471 15:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.471 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:06.471 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.471 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:06.728 15:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:07.293 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:07.293 { 00:22:07.293 "cntlid": 107, 00:22:07.293 "qid": 0, 00:22:07.293 "state": "enabled", 00:22:07.293 "listen_address": { 00:22:07.293 "trtype": "TCP", 00:22:07.293 "adrfam": "IPv4", 00:22:07.293 "traddr": "10.0.0.2", 00:22:07.293 "trsvcid": "4420" 00:22:07.293 }, 00:22:07.293 "peer_address": { 00:22:07.293 "trtype": "TCP", 00:22:07.293 "adrfam": "IPv4", 00:22:07.293 "traddr": "10.0.0.1", 00:22:07.293 "trsvcid": "47876" 00:22:07.293 }, 00:22:07.293 "auth": { 00:22:07.293 "state": "completed", 00:22:07.293 "digest": "sha512", 00:22:07.293 "dhgroup": "ffdhe2048" 00:22:07.293 } 00:22:07.293 } 00:22:07.293 ]' 00:22:07.293 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.551 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.809 15:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.741 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:08.998 15:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:09.255 00:22:09.255 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:09.255 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:09.255 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:09.511 { 00:22:09.511 "cntlid": 109, 00:22:09.511 "qid": 0, 00:22:09.511 "state": "enabled", 00:22:09.511 "listen_address": { 00:22:09.511 "trtype": "TCP", 00:22:09.511 "adrfam": "IPv4", 00:22:09.511 "traddr": "10.0.0.2", 00:22:09.511 "trsvcid": "4420" 00:22:09.511 }, 00:22:09.511 "peer_address": { 00:22:09.511 "trtype": "TCP", 00:22:09.511 "adrfam": "IPv4", 00:22:09.511 "traddr": "10.0.0.1", 00:22:09.511 "trsvcid": "54876" 00:22:09.511 }, 00:22:09.511 "auth": { 00:22:09.511 "state": "completed", 00:22:09.511 "digest": "sha512", 00:22:09.511 "dhgroup": "ffdhe2048" 00:22:09.511 } 00:22:09.511 } 00:22:09.511 ]' 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.511 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:09.768 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:09.768 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:09.768 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.768 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.768 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.024 15:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:10.953 15:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:11.210 15:40:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:22:11.210 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:11.210 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.210 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:11.210 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.211 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:11.211 15:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.211 15:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.211 15:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.211 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.211 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.468 00:22:11.468 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:11.468 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:11.468 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:11.725 { 00:22:11.725 "cntlid": 111, 00:22:11.725 "qid": 0, 00:22:11.725 "state": "enabled", 00:22:11.725 "listen_address": { 00:22:11.725 "trtype": "TCP", 00:22:11.725 "adrfam": "IPv4", 00:22:11.725 "traddr": "10.0.0.2", 00:22:11.725 "trsvcid": "4420" 00:22:11.725 }, 00:22:11.725 "peer_address": { 00:22:11.725 "trtype": "TCP", 00:22:11.725 "adrfam": "IPv4", 00:22:11.725 "traddr": "10.0.0.1", 00:22:11.725 "trsvcid": "54898" 00:22:11.725 }, 00:22:11.725 "auth": { 00:22:11.725 "state": "completed", 00:22:11.725 "digest": "sha512", 00:22:11.725 "dhgroup": "ffdhe2048" 00:22:11.725 } 00:22:11.725 } 00:22:11.725 ]' 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.725 15:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.983 15:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.412 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.412 15:40:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.669 00:22:13.669 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:13.669 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:13.669 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:13.926 { 00:22:13.926 "cntlid": 113, 00:22:13.926 "qid": 0, 00:22:13.926 "state": "enabled", 00:22:13.926 "listen_address": { 00:22:13.926 "trtype": "TCP", 00:22:13.926 "adrfam": "IPv4", 00:22:13.926 "traddr": "10.0.0.2", 00:22:13.926 "trsvcid": "4420" 00:22:13.926 }, 00:22:13.926 "peer_address": { 00:22:13.926 "trtype": "TCP", 00:22:13.926 "adrfam": "IPv4", 00:22:13.926 "traddr": "10.0.0.1", 00:22:13.926 "trsvcid": "54936" 00:22:13.926 }, 00:22:13.926 "auth": { 00:22:13.926 "state": "completed", 00:22:13.926 "digest": "sha512", 00:22:13.926 "dhgroup": "ffdhe3072" 00:22:13.926 } 00:22:13.926 } 00:22:13.926 ]' 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.926 15:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:13.926 15:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.926 15:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.926 15:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.183 15:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:15.116 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:15.373 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:15.937 00:22:15.937 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:15.937 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:15.937 15:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.937 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.937 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.937 15:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.937 15:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.937 15:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.937 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:15.937 { 
00:22:15.937 "cntlid": 115, 00:22:15.937 "qid": 0, 00:22:15.937 "state": "enabled", 00:22:15.937 "listen_address": { 00:22:15.937 "trtype": "TCP", 00:22:15.937 "adrfam": "IPv4", 00:22:15.937 "traddr": "10.0.0.2", 00:22:15.937 "trsvcid": "4420" 00:22:15.937 }, 00:22:15.937 "peer_address": { 00:22:15.937 "trtype": "TCP", 00:22:15.937 "adrfam": "IPv4", 00:22:15.937 "traddr": "10.0.0.1", 00:22:15.937 "trsvcid": "54964" 00:22:15.937 }, 00:22:15.937 "auth": { 00:22:15.937 "state": "completed", 00:22:15.937 "digest": "sha512", 00:22:15.937 "dhgroup": "ffdhe3072" 00:22:15.937 } 00:22:15.937 } 00:22:15.938 ]' 00:22:15.938 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.195 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.453 15:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.384 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 
00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:17.642 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:17.900 00:22:17.900 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:17.900 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:17.900 15:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:18.159 { 00:22:18.159 "cntlid": 117, 00:22:18.159 "qid": 0, 00:22:18.159 "state": "enabled", 00:22:18.159 "listen_address": { 00:22:18.159 "trtype": "TCP", 00:22:18.159 "adrfam": "IPv4", 00:22:18.159 "traddr": "10.0.0.2", 00:22:18.159 "trsvcid": "4420" 00:22:18.159 }, 00:22:18.159 "peer_address": { 00:22:18.159 "trtype": "TCP", 00:22:18.159 "adrfam": "IPv4", 00:22:18.159 "traddr": "10.0.0.1", 00:22:18.159 "trsvcid": "54988" 00:22:18.159 }, 00:22:18.159 "auth": { 00:22:18.159 "state": "completed", 00:22:18.159 "digest": "sha512", 00:22:18.159 "dhgroup": "ffdhe3072" 00:22:18.159 } 00:22:18.159 } 00:22:18.159 ]' 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.159 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:18.416 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:18.416 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:18.416 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.416 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.416 15:40:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.673 15:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:19.604 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.861 15:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.118 00:22:20.118 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:20.118 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:22:20.118 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.375 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.375 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.375 15:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.375 15:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.375 15:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.375 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:20.375 { 00:22:20.375 "cntlid": 119, 00:22:20.375 "qid": 0, 00:22:20.375 "state": "enabled", 00:22:20.375 "listen_address": { 00:22:20.375 "trtype": "TCP", 00:22:20.375 "adrfam": "IPv4", 00:22:20.375 "traddr": "10.0.0.2", 00:22:20.375 "trsvcid": "4420" 00:22:20.375 }, 00:22:20.375 "peer_address": { 00:22:20.376 "trtype": "TCP", 00:22:20.376 "adrfam": "IPv4", 00:22:20.376 "traddr": "10.0.0.1", 00:22:20.376 "trsvcid": "47952" 00:22:20.376 }, 00:22:20.376 "auth": { 00:22:20.376 "state": "completed", 00:22:20.376 "digest": "sha512", 00:22:20.376 "dhgroup": "ffdhe3072" 00:22:20.376 } 00:22:20.376 } 00:22:20.376 ]' 00:22:20.376 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.633 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.891 15:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.824 15:40:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.824 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.081 15:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.338 00:22:22.338 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:22.338 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:22.338 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:22.596 { 00:22:22.596 "cntlid": 121, 00:22:22.596 "qid": 0, 00:22:22.596 "state": "enabled", 00:22:22.596 "listen_address": { 00:22:22.596 "trtype": "TCP", 00:22:22.596 "adrfam": "IPv4", 00:22:22.596 "traddr": "10.0.0.2", 00:22:22.596 "trsvcid": "4420" 00:22:22.596 }, 00:22:22.596 "peer_address": { 00:22:22.596 "trtype": "TCP", 00:22:22.596 "adrfam": "IPv4", 00:22:22.596 "traddr": "10.0.0.1", 00:22:22.596 "trsvcid": "47978" 00:22:22.596 }, 
00:22:22.596 "auth": { 00:22:22.596 "state": "completed", 00:22:22.596 "digest": "sha512", 00:22:22.596 "dhgroup": "ffdhe4096" 00:22:22.596 } 00:22:22.596 } 00:22:22.596 ]' 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:22.596 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:22.853 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.853 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.853 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.111 15:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:24.042 15:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:24.042 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:24.607 00:22:24.607 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:24.607 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.607 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:24.864 { 00:22:24.864 "cntlid": 123, 00:22:24.864 "qid": 0, 00:22:24.864 "state": "enabled", 00:22:24.864 "listen_address": { 00:22:24.864 "trtype": "TCP", 00:22:24.864 "adrfam": "IPv4", 00:22:24.864 "traddr": "10.0.0.2", 00:22:24.864 "trsvcid": "4420" 00:22:24.864 }, 00:22:24.864 "peer_address": { 00:22:24.864 "trtype": "TCP", 00:22:24.864 "adrfam": "IPv4", 00:22:24.864 "traddr": "10.0.0.1", 00:22:24.864 "trsvcid": "47998" 00:22:24.864 }, 00:22:24.864 "auth": { 00:22:24.864 "state": "completed", 00:22:24.864 "digest": "sha512", 00:22:24.864 "dhgroup": "ffdhe4096" 00:22:24.864 } 00:22:24.864 } 00:22:24.864 ]' 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.864 15:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.122 15:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:22:26.054 15:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.054 15:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.054 15:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.054 15:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.054 15:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.054 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:26.054 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:26.054 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.311 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.568 00:22:26.568 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:26.568 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.568 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.825 15:40:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:26.825 { 00:22:26.825 "cntlid": 125, 00:22:26.825 "qid": 0, 00:22:26.825 "state": "enabled", 00:22:26.825 "listen_address": { 00:22:26.825 "trtype": "TCP", 00:22:26.825 "adrfam": "IPv4", 00:22:26.825 "traddr": "10.0.0.2", 00:22:26.825 "trsvcid": "4420" 00:22:26.825 }, 00:22:26.825 "peer_address": { 00:22:26.825 "trtype": "TCP", 00:22:26.825 "adrfam": "IPv4", 00:22:26.825 "traddr": "10.0.0.1", 00:22:26.825 "trsvcid": "48034" 00:22:26.825 }, 00:22:26.825 "auth": { 00:22:26.825 "state": "completed", 00:22:26.825 "digest": "sha512", 00:22:26.825 "dhgroup": "ffdhe4096" 00:22:26.825 } 00:22:26.825 } 00:22:26.825 ]' 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.825 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:27.083 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:27.083 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:27.083 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.083 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.083 15:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.341 15:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:28.301 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 
ffdhe4096 3 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.559 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.817 00:22:28.817 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:28.817 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:28.817 15:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:29.074 { 00:22:29.074 "cntlid": 127, 00:22:29.074 "qid": 0, 00:22:29.074 "state": "enabled", 00:22:29.074 "listen_address": { 00:22:29.074 "trtype": "TCP", 00:22:29.074 "adrfam": "IPv4", 00:22:29.074 "traddr": "10.0.0.2", 00:22:29.074 "trsvcid": "4420" 00:22:29.074 }, 00:22:29.074 "peer_address": { 00:22:29.074 "trtype": "TCP", 00:22:29.074 "adrfam": "IPv4", 00:22:29.074 "traddr": "10.0.0.1", 00:22:29.074 "trsvcid": "38304" 00:22:29.074 }, 00:22:29.074 "auth": { 00:22:29.074 "state": "completed", 00:22:29.074 "digest": "sha512", 00:22:29.074 "dhgroup": "ffdhe4096" 00:22:29.074 } 00:22:29.074 } 00:22:29.074 ]' 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.074 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:29.331 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:29.331 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:29.331 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.331 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.331 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.588 15:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.520 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:30.777 15:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:31.341 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.341 15:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.599 15:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.599 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:31.599 { 00:22:31.599 "cntlid": 129, 00:22:31.599 "qid": 0, 00:22:31.599 "state": "enabled", 00:22:31.600 "listen_address": { 00:22:31.600 "trtype": "TCP", 00:22:31.600 "adrfam": "IPv4", 00:22:31.600 "traddr": "10.0.0.2", 00:22:31.600 "trsvcid": "4420" 00:22:31.600 }, 00:22:31.600 "peer_address": { 00:22:31.600 "trtype": "TCP", 00:22:31.600 "adrfam": "IPv4", 00:22:31.600 "traddr": "10.0.0.1", 00:22:31.600 "trsvcid": "38330" 00:22:31.600 }, 00:22:31.600 "auth": { 00:22:31.600 "state": "completed", 00:22:31.600 "digest": "sha512", 00:22:31.600 "dhgroup": "ffdhe6144" 00:22:31.600 } 00:22:31.600 } 00:22:31.600 ]' 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.600 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.858 15:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.790 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:33.047 15:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:33.612 00:22:33.612 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:33.612 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.612 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:33.869 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.869 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.869 15:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.869 15:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:33.870 { 00:22:33.870 "cntlid": 131, 00:22:33.870 "qid": 0, 
00:22:33.870 "state": "enabled", 00:22:33.870 "listen_address": { 00:22:33.870 "trtype": "TCP", 00:22:33.870 "adrfam": "IPv4", 00:22:33.870 "traddr": "10.0.0.2", 00:22:33.870 "trsvcid": "4420" 00:22:33.870 }, 00:22:33.870 "peer_address": { 00:22:33.870 "trtype": "TCP", 00:22:33.870 "adrfam": "IPv4", 00:22:33.870 "traddr": "10.0.0.1", 00:22:33.870 "trsvcid": "38358" 00:22:33.870 }, 00:22:33.870 "auth": { 00:22:33.870 "state": "completed", 00:22:33.870 "digest": "sha512", 00:22:33.870 "dhgroup": "ffdhe6144" 00:22:33.870 } 00:22:33.870 } 00:22:33.870 ]' 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.870 15:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.127 15:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.058 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:35.316 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:35.882 00:22:35.882 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:35.882 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:35.882 15:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:36.140 { 00:22:36.140 "cntlid": 133, 00:22:36.140 "qid": 0, 00:22:36.140 "state": "enabled", 00:22:36.140 "listen_address": { 00:22:36.140 "trtype": "TCP", 00:22:36.140 "adrfam": "IPv4", 00:22:36.140 "traddr": "10.0.0.2", 00:22:36.140 "trsvcid": "4420" 00:22:36.140 }, 00:22:36.140 "peer_address": { 00:22:36.140 "trtype": "TCP", 00:22:36.140 "adrfam": "IPv4", 00:22:36.140 "traddr": "10.0.0.1", 00:22:36.140 "trsvcid": "38382" 00:22:36.140 }, 00:22:36.140 "auth": { 00:22:36.140 "state": "completed", 00:22:36.140 "digest": "sha512", 00:22:36.140 "dhgroup": "ffdhe6144" 00:22:36.140 } 00:22:36.140 } 00:22:36.140 ]' 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:36.140 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:36.397 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.397 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.397 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.654 15:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.586 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.845 15:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.410 00:22:38.410 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:38.410 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:38.410 15:40:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:38.668 { 00:22:38.668 "cntlid": 135, 00:22:38.668 "qid": 0, 00:22:38.668 "state": "enabled", 00:22:38.668 "listen_address": { 00:22:38.668 "trtype": "TCP", 00:22:38.668 "adrfam": "IPv4", 00:22:38.668 "traddr": "10.0.0.2", 00:22:38.668 "trsvcid": "4420" 00:22:38.668 }, 00:22:38.668 "peer_address": { 00:22:38.668 "trtype": "TCP", 00:22:38.668 "adrfam": "IPv4", 00:22:38.668 "traddr": "10.0.0.1", 00:22:38.668 "trsvcid": "38418" 00:22:38.668 }, 00:22:38.668 "auth": { 00:22:38.668 "state": "completed", 00:22:38.668 "digest": "sha512", 00:22:38.668 "dhgroup": "ffdhe6144" 00:22:38.668 } 00:22:38.668 } 00:22:38.668 ]' 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.668 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.925 15:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.858 15:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:40.116 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:41.048 00:22:41.048 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:41.048 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:41.048 15:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:41.305 { 00:22:41.305 "cntlid": 137, 00:22:41.305 "qid": 0, 00:22:41.305 "state": "enabled", 00:22:41.305 "listen_address": { 00:22:41.305 "trtype": "TCP", 00:22:41.305 "adrfam": "IPv4", 00:22:41.305 "traddr": "10.0.0.2", 00:22:41.305 "trsvcid": "4420" 00:22:41.305 }, 00:22:41.305 "peer_address": { 00:22:41.305 "trtype": "TCP", 00:22:41.305 "adrfam": "IPv4", 00:22:41.305 "traddr": "10.0.0.1", 00:22:41.305 "trsvcid": "33184" 00:22:41.305 }, 00:22:41.305 "auth": { 00:22:41.305 "state": 
"completed", 00:22:41.305 "digest": "sha512", 00:22:41.305 "dhgroup": "ffdhe8192" 00:22:41.305 } 00:22:41.305 } 00:22:41.305 ]' 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.305 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.563 15:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.495 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:42.751 15:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.752 15:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.752 15:40:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.752 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:42.752 15:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:43.721 00:22:43.721 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:43.721 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.721 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:43.977 { 00:22:43.977 "cntlid": 139, 00:22:43.977 "qid": 0, 00:22:43.977 "state": "enabled", 00:22:43.977 "listen_address": { 00:22:43.977 "trtype": "TCP", 00:22:43.977 "adrfam": "IPv4", 00:22:43.977 "traddr": "10.0.0.2", 00:22:43.977 "trsvcid": "4420" 00:22:43.977 }, 00:22:43.977 "peer_address": { 00:22:43.977 "trtype": "TCP", 00:22:43.977 "adrfam": "IPv4", 00:22:43.977 "traddr": "10.0.0.1", 00:22:43.977 "trsvcid": "33212" 00:22:43.977 }, 00:22:43.977 "auth": { 00:22:43.977 "state": "completed", 00:22:43.977 "digest": "sha512", 00:22:43.977 "dhgroup": "ffdhe8192" 00:22:43.977 } 00:22:43.977 } 00:22:43.977 ]' 00:22:43.977 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:43.978 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.978 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:43.978 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.978 15:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:43.978 15:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.978 15:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.978 15:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.234 15:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:MzcxMjU5YzI4NGI5Y2Q5MDAwNDYxNTQ3NDU4NDlkNDOjdI4W: 00:22:45.163 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.163 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:45.163 15:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.163 15:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.163 15:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.164 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:45.164 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.164 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:45.422 15:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.355 00:22:46.355 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:46.355 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:46.355 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:46.613 { 00:22:46.613 "cntlid": 141, 00:22:46.613 "qid": 0, 00:22:46.613 "state": "enabled", 00:22:46.613 "listen_address": { 00:22:46.613 "trtype": "TCP", 00:22:46.613 "adrfam": "IPv4", 00:22:46.613 "traddr": "10.0.0.2", 00:22:46.613 "trsvcid": "4420" 00:22:46.613 }, 00:22:46.613 "peer_address": { 00:22:46.613 "trtype": "TCP", 00:22:46.613 "adrfam": "IPv4", 00:22:46.613 "traddr": "10.0.0.1", 00:22:46.613 "trsvcid": "33236" 00:22:46.613 }, 00:22:46.613 "auth": { 00:22:46.613 "state": "completed", 00:22:46.613 "digest": "sha512", 00:22:46.613 "dhgroup": "ffdhe8192" 00:22:46.613 } 00:22:46.613 } 00:22:46.613 ]' 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.613 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:46.870 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.870 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.870 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.127 15:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:M2U5MDY4OTJmMWE0MWQyY2YzMWE2ZDg0ZmM4OTNiNmVjM2NiZTMyMjJlZDZiZjE3Sabsew==: 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.059 15:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:22:48.059 
15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.059 15:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.991 00:22:48.991 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:48.991 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:48.991 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.248 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.248 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.248 15:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.248 15:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.248 15:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.248 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:49.248 { 00:22:49.248 "cntlid": 143, 00:22:49.248 "qid": 0, 00:22:49.249 "state": "enabled", 00:22:49.249 "listen_address": { 00:22:49.249 "trtype": "TCP", 00:22:49.249 "adrfam": "IPv4", 00:22:49.249 "traddr": "10.0.0.2", 00:22:49.249 "trsvcid": "4420" 00:22:49.249 }, 00:22:49.249 "peer_address": { 00:22:49.249 "trtype": "TCP", 00:22:49.249 "adrfam": "IPv4", 00:22:49.249 "traddr": "10.0.0.1", 00:22:49.249 "trsvcid": "44608" 00:22:49.249 }, 00:22:49.249 "auth": { 00:22:49.249 "state": "completed", 00:22:49.249 "digest": "sha512", 00:22:49.249 "dhgroup": "ffdhe8192" 00:22:49.249 } 00:22:49.249 } 00:22:49.249 ]' 00:22:49.249 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:49.249 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.249 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:49.506 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.506 
15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:49.506 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.506 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.506 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.764 15:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:OTc5MTRiNzE4NjA0MjYxZjVhM2JhNzNiOTg3Mjk2ZjhmYjNjZDQ4ZWRkMTZiMjA0OTE1YTljY2ZkMGMyODI4YlOHlEQ=: 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.696 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:50.953 15:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:51.886 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.886 15:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.143 15:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.143 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:52.143 { 00:22:52.143 "cntlid": 145, 00:22:52.143 "qid": 0, 00:22:52.143 "state": "enabled", 00:22:52.143 "listen_address": { 00:22:52.143 "trtype": "TCP", 00:22:52.143 "adrfam": "IPv4", 00:22:52.143 "traddr": "10.0.0.2", 00:22:52.143 "trsvcid": "4420" 00:22:52.143 }, 00:22:52.143 "peer_address": { 00:22:52.143 "trtype": "TCP", 00:22:52.143 "adrfam": "IPv4", 00:22:52.143 "traddr": "10.0.0.1", 00:22:52.144 "trsvcid": "44634" 00:22:52.144 }, 00:22:52.144 "auth": { 00:22:52.144 "state": "completed", 00:22:52.144 "digest": "sha512", 00:22:52.144 "dhgroup": "ffdhe8192" 00:22:52.144 } 00:22:52.144 } 00:22:52.144 ]' 00:22:52.144 15:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.144 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.402 15:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjNkMjJjMzQyZGRiNzUyYzUyNGE3YTgxZTc0ZWQ3MGRlYjA3MjcyYWU2NjZlMGY5h2547w==: 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:53.334 15:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:54.266 request: 00:22:54.266 { 00:22:54.266 "name": "nvme0", 00:22:54.266 "trtype": "tcp", 00:22:54.266 "traddr": "10.0.0.2", 00:22:54.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:54.266 "adrfam": "ipv4", 00:22:54.266 "trsvcid": "4420", 00:22:54.266 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.266 "dhchap_key": "key2", 00:22:54.266 "method": "bdev_nvme_attach_controller", 00:22:54.266 "req_id": 1 00:22:54.266 } 00:22:54.266 Got JSON-RPC error response 00:22:54.266 response: 00:22:54.266 { 00:22:54.266 "code": -32602, 00:22:54.266 "message": "Invalid parameters" 00:22:54.266 } 00:22:54.266 15:41:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1324415 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1324415 ']' 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1324415 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1324415 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:54.266 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1324415' 00:22:54.267 killing process with pid 1324415 00:22:54.267 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1324415 00:22:54.267 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1324415 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.831 rmmod nvme_tcp 00:22:54.831 rmmod nvme_fabrics 00:22:54.831 rmmod nvme_keyring 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1324381 ']' 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1324381 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1324381 ']' 00:22:54.831 
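Stripped of the xtrace plumbing, the host-side attach/verify cycle exercised above comes down to two RPCs against the host socket plus one query on the target side. A minimal sketch, reusing the addresses and NQNs from this run and assuming the key label key0 was registered with the target earlier in the test (not shown in this excerpt):

  HOST_SOCK=/var/tmp/host.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # attach through the bdev_nvme host stack, authenticating with a DH-HMAC-CHAP key label
  scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
  # on the target, read back the negotiated digest/dhgroup of the new qpair
  scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq '.[0].auth'
  # drop the controller again
  scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

Attaching with a key the target will not accept (the key2 case above) fails the same RPC with JSON-RPC error -32602, which is exactly what the NOT wrapper asserts.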
15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1324381 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.831 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1324381 00:22:54.832 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:54.832 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:54.832 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1324381' 00:22:54.832 killing process with pid 1324381 00:22:54.832 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1324381 00:22:54.832 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1324381 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.090 15:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.990 15:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.990 15:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Jqq /tmp/spdk.key-sha256.Zot /tmp/spdk.key-sha384.MYR /tmp/spdk.key-sha512.9OG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:56.990 00:22:56.990 real 2m57.390s 00:22:56.990 user 6m51.465s 00:22:56.990 sys 0m21.241s 00:22:56.990 15:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:56.990 15:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.990 ************************************ 00:22:56.990 END TEST nvmf_auth_target 00:22:56.990 ************************************ 00:22:56.990 15:41:10 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:56.990 15:41:10 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:56.990 15:41:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:56.990 15:41:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:56.990 15:41:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.990 ************************************ 00:22:56.990 START TEST nvmf_bdevio_no_huge 00:22:56.990 ************************************ 00:22:56.990 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:56.990 * 
Looking for test storage... 00:22:56.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.248 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:57.249 15:41:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:57.249 15:41:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:59.776 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:59.776 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:59.776 Found net devices under 0000:09:00.0: cvl_0_0 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.776 15:41:12 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:59.776 Found net devices under 0000:09:00.1: cvl_0_1 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:59.776 00:22:59.776 --- 10.0.0.2 ping statistics --- 00:22:59.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.776 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:22:59.776 00:22:59.776 --- 10.0.0.1 ping statistics --- 00:22:59.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.776 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:59.776 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1348393 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1348393 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1348393 ']' 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
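The namespace wiring and application start-up captured above reduce to the following sequence. This is a sketch only: the interface names (cvl_0_0 moved into the target namespace, cvl_0_1 left in the root namespace) are the ones on this machine, and spdk_get_version is used purely as an RPC liveness probe where the harness uses its waitforlisten helper:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept NVMe/TCP (port 4420) on the initiator-side interface, as above
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  # start the target inside the namespace: no hugepages, 1024 MB of memory, cores 3-6
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done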
00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.777 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 [2024-05-15 15:41:12.710375] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:22:59.777 [2024-05-15 15:41:12.710451] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:59.777 [2024-05-15 15:41:12.764584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:59.777 [2024-05-15 15:41:12.787129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.777 [2024-05-15 15:41:12.864618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.777 [2024-05-15 15:41:12.864670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.777 [2024-05-15 15:41:12.864693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.777 [2024-05-15 15:41:12.864705] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.777 [2024-05-15 15:41:12.864715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.777 [2024-05-15 15:41:12.864839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:59.777 [2024-05-15 15:41:12.864901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:59.777 [2024-05-15 15:41:12.864968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:59.777 [2024-05-15 15:41:12.864971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.074 [2024-05-15 15:41:12.973619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.074 Malloc0 00:23:00.074 15:41:12 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.074 15:41:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.074 [2024-05-15 15:41:13.011097] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:00.074 [2024-05-15 15:41:13.011408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.074 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.074 { 00:23:00.074 "params": { 00:23:00.075 "name": "Nvme$subsystem", 00:23:00.075 "trtype": "$TEST_TRANSPORT", 00:23:00.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.075 "adrfam": "ipv4", 00:23:00.075 "trsvcid": "$NVMF_PORT", 00:23:00.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.075 "hdgst": ${hdgst:-false}, 00:23:00.075 "ddgst": ${ddgst:-false} 00:23:00.075 }, 00:23:00.075 "method": "bdev_nvme_attach_controller" 00:23:00.075 } 00:23:00.075 EOF 00:23:00.075 )") 00:23:00.075 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:00.075 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
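Behind the rpc_cmd wrapper (which just drives scripts/rpc.py), the target set-up that bdevio is pointed at is this short RPC sequence, with the same flags as issued above:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself then attaches to that listener through the bdev_nvme_attach_controller JSON fragment assembled by gen_nvmf_target_json above and printed just below.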
00:23:00.075 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:00.075 15:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:00.075 "params": { 00:23:00.075 "name": "Nvme1", 00:23:00.075 "trtype": "tcp", 00:23:00.075 "traddr": "10.0.0.2", 00:23:00.075 "adrfam": "ipv4", 00:23:00.075 "trsvcid": "4420", 00:23:00.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.075 "hdgst": false, 00:23:00.075 "ddgst": false 00:23:00.075 }, 00:23:00.075 "method": "bdev_nvme_attach_controller" 00:23:00.075 }' 00:23:00.075 [2024-05-15 15:41:13.057858] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:00.075 [2024-05-15 15:41:13.057945] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1348418 ] 00:23:00.075 [2024-05-15 15:41:13.101871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:00.075 [2024-05-15 15:41:13.126752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.332 [2024-05-15 15:41:13.215370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.332 [2024-05-15 15:41:13.215421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.332 [2024-05-15 15:41:13.215425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.588 I/O targets: 00:23:00.589 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:00.589 00:23:00.589 00:23:00.589 CUnit - A unit testing framework for C - Version 2.1-3 00:23:00.589 http://cunit.sourceforge.net/ 00:23:00.589 00:23:00.589 00:23:00.589 Suite: bdevio tests on: Nvme1n1 00:23:00.589 Test: blockdev write read block ...passed 00:23:00.589 Test: blockdev write zeroes read block ...passed 00:23:00.589 Test: blockdev write zeroes read no split ...passed 00:23:00.589 Test: blockdev write zeroes read split ...passed 00:23:00.846 Test: blockdev write zeroes read split partial ...passed 00:23:00.846 Test: blockdev reset ...[2024-05-15 15:41:13.734702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.846 [2024-05-15 15:41:13.734811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87720 (9): Bad file descriptor 00:23:00.846 [2024-05-15 15:41:13.796524] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:00.846 passed 00:23:00.846 Test: blockdev write read 8 blocks ...passed 00:23:00.846 Test: blockdev write read size > 128k ...passed 00:23:00.846 Test: blockdev write read invalid size ...passed 00:23:00.846 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.846 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.846 Test: blockdev write read max offset ...passed 00:23:00.846 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.846 Test: blockdev writev readv 8 blocks ...passed 00:23:00.846 Test: blockdev writev readv 30 x 1block ...passed 00:23:01.104 Test: blockdev writev readv block ...passed 00:23:01.104 Test: blockdev writev readv size > 128k ...passed 00:23:01.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:01.104 Test: blockdev comparev and writev ...[2024-05-15 15:41:13.972028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.972065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.972089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.972106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.972457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.972482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.972504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.972528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.972872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.972898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.972921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.972937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.973273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.973299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:13.973321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:01.104 [2024-05-15 15:41:13.973338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:01.104 passed 00:23:01.104 Test: blockdev nvme passthru rw ...passed 00:23:01.104 Test: blockdev nvme passthru vendor specific ...[2024-05-15 15:41:14.056543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:01.104 [2024-05-15 15:41:14.056574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:14.056767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:01.104 [2024-05-15 15:41:14.056792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:14.056978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:01.104 [2024-05-15 15:41:14.057003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:01.104 [2024-05-15 15:41:14.057189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:01.104 [2024-05-15 15:41:14.057213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:01.104 passed 00:23:01.104 Test: blockdev nvme admin passthru ...passed 00:23:01.104 Test: blockdev copy ...passed 00:23:01.104 00:23:01.104 Run Summary: Type Total Ran Passed Failed Inactive 00:23:01.104 suites 1 1 n/a 0 0 00:23:01.104 tests 23 23 23 0 0 00:23:01.104 asserts 152 152 152 0 n/a 00:23:01.104 00:23:01.104 Elapsed time = 1.179 seconds 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.362 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.362 rmmod nvme_tcp 00:23:01.362 rmmod nvme_fabrics 00:23:01.621 rmmod nvme_keyring 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1348393 ']' 00:23:01.621 15:41:14 
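Condensed, the teardown interleaved here (nvmf_delete_subsystem followed by nvmftestfini) is roughly equivalent to the following sketch, with $nvmfpid standing for the target PID recorded at start-up:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"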
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1348393 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1348393 ']' 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1348393 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1348393 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1348393' 00:23:01.621 killing process with pid 1348393 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1348393 00:23:01.621 [2024-05-15 15:41:14.511865] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:01.621 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1348393 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.880 15:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.413 15:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.413 00:23:04.413 real 0m6.884s 00:23:04.413 user 0m10.956s 00:23:04.413 sys 0m2.828s 00:23:04.413 15:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:04.413 15:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:04.413 ************************************ 00:23:04.413 END TEST nvmf_bdevio_no_huge 00:23:04.413 ************************************ 00:23:04.413 15:41:16 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:04.413 15:41:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:04.413 15:41:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:04.413 15:41:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.413 ************************************ 00:23:04.413 START TEST nvmf_tls 00:23:04.413 ************************************ 00:23:04.413 15:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:23:04.413 * Looking for test storage... 00:23:04.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
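The common.sh bootstrap above also fixes the host identity used for every connection in these tests: a host NQN generated once with nvme-cli, with its uuid reused as the host ID. A small illustrative sketch (port 4420 and nqn.2016-06.io.spdk:testnqn are the defaults set above, 10.0.0.2 is the target address used throughout this run, and taking the uuid with a suffix-strip is just one way to derive it):

  HOSTNQN=$(nvme gen-hostnqn)
  HOSTID=${HOSTNQN##*:}        # uuid portion of the generated NQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn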
00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:04.413 15:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:06.940 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.940 
15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:06.940 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.940 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:06.941 Found net devices under 0000:09:00.0: cvl_0_0 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:06.941 Found net devices under 0000:09:00.1: cvl_0_1 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.941 
15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:23:06.941 00:23:06.941 --- 10.0.0.2 ping statistics --- 00:23:06.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.941 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:23:06.941 00:23:06.941 --- 10.0.0.1 ping statistics --- 00:23:06.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.941 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1350905 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1350905 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1350905 ']' 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.941 15:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.941 [2024-05-15 15:41:19.838386] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:06.941 [2024-05-15 15:41:19.838463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.941 [2024-05-15 15:41:19.882941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:06.941 [2024-05-15 15:41:19.913696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.941 [2024-05-15 15:41:19.993220] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:06.941 [2024-05-15 15:41:19.993270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.941 [2024-05-15 15:41:19.993294] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.941 [2024-05-15 15:41:19.993305] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.941 [2024-05-15 15:41:19.993315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.941 [2024-05-15 15:41:19.993341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:07.199 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:07.456 true 00:23:07.456 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.456 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:07.714 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:07.714 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:07.714 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:07.971 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.971 15:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:07.971 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:07.971 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:07.971 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:08.229 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:08.229 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:08.487 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:08.487 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:08.487 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:08.744 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:08.744 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:08.744 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:08.744 15:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:09.002 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.002 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:09.259 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:09.259 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:09.259 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:09.517 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.517 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:09.773 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.SbD501IbQF 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.I2l6QHZLnU 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.SbD501IbQF 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 
/tmp/tmp.I2l6QHZLnU 00:23:10.031 15:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:10.288 15:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:10.545 15:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.SbD501IbQF 00:23:10.545 15:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SbD501IbQF 00:23:10.545 15:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.803 [2024-05-15 15:41:23.754003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.803 15:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.060 15:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.317 [2024-05-15 15:41:24.239244] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:11.317 [2024-05-15 15:41:24.239343] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.317 [2024-05-15 15:41:24.239590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.317 15:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.575 malloc0 00:23:11.575 15:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.832 15:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SbD501IbQF 00:23:12.090 [2024-05-15 15:41:24.977553] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:12.090 15:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.SbD501IbQF 00:23:12.090 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.049 Initializing NVMe Controllers 00:23:22.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:22.049 Initialization complete. Launching workers. 
00:23:22.049 ======================================================== 00:23:22.049 Latency(us) 00:23:22.049 Device Information : IOPS MiB/s Average min max 00:23:22.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7396.98 28.89 8655.06 1427.77 10732.70 00:23:22.049 ======================================================== 00:23:22.049 Total : 7396.98 28.89 8655.06 1427.77 10732.70 00:23:22.049 00:23:22.049 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbD501IbQF 00:23:22.049 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.049 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.049 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SbD501IbQF' 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1352668 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1352668 /var/tmp/bdevperf.sock 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1352668 ']' 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:22.050 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.309 [2024-05-15 15:41:35.155326] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:22.309 [2024-05-15 15:41:35.155413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352668 ] 00:23:22.309 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.309 [2024-05-15 15:41:35.193853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
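The two key files created above (/tmp/tmp.SbD501IbQF and /tmp/tmp.I2l6QHZLnU) hold PSKs in the interchange format printed by format_interchange_psk. The Python heredoc behind format_key is not captured in the trace, but the base64 payload visible above begins with the literal secret text, so the encoding can be reconstructed as a sketch: the hexadecimal secret is kept as ASCII rather than decoded to binary, suffixed with its CRC-32 (little-endian byte order is an assumption here), and base64-encoded behind an NVMeTLSkey-1:<hash>: prefix. Digest 1 is used for the 32-hex-digit secrets and 2 for the 48-hex-digit one later in this run, which by convention correspond to SHA-256 and SHA-384 retained hashes, though the trace itself does not show that mapping.

```python
import base64
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    # Sketch of the encoding used for the /tmp/tmp.* key files above.
    # Assumptions: the secret string itself (not a hex-decoded form) is
    # suffixed with its CRC-32 in little-endian order before base64 encoding.
    raw = secret.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hash_id:02x}:{base64.b64encode(raw + crc).decode()}:"

# If the assumptions hold, this reproduces the first key seen above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
```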
00:23:22.309 [2024-05-15 15:41:35.227473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.309 [2024-05-15 15:41:35.313729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.615 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:22.615 15:41:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:22.615 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SbD501IbQF 00:23:22.615 [2024-05-15 15:41:35.641679] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.615 [2024-05-15 15:41:35.641797] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:22.873 TLSTESTn1 00:23:22.873 15:41:35 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:22.873 Running I/O for 10 seconds... 00:23:32.833 00:23:32.833 Latency(us) 00:23:32.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.833 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:32.833 Verification LBA range: start 0x0 length 0x2000 00:23:32.833 TLSTESTn1 : 10.03 3585.72 14.01 0.00 0.00 35627.98 5922.51 45632.47 00:23:32.833 =================================================================================================================== 00:23:32.833 Total : 3585.72 14.01 0.00 0.00 35627.98 5922.51 45632.47 00:23:32.833 0 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1352668 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1352668 ']' 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1352668 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1352668 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1352668' 00:23:32.834 killing process with pid 1352668 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1352668 00:23:32.834 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.834 00:23:32.834 Latency(us) 00:23:32.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.834 =================================================================================================================== 00:23:32.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.834 [2024-05-15 15:41:45.930867] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:23:32.834 15:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1352668 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I2l6QHZLnU 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I2l6QHZLnU 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I2l6QHZLnU 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.I2l6QHZLnU' 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1353986 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1353986 /var/tmp/bdevperf.sock 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1353986 ']' 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.092 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.350 [2024-05-15 15:41:46.204274] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:33.350 [2024-05-15 15:41:46.204364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353986 ] 00:23:33.350 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.350 [2024-05-15 15:41:46.240155] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
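The bdevperf instance starting here (pid 1353986) and the ones that follow repeat the controller attach with deliberately broken credentials, each wrapped in NOT so that a failure is the passing outcome. A compact summary of the attach attempts in this part of the run (including the successful one just above), gathered from the traces, with a small illustrative helper to print it:

```python
# Attach attempts traced in this section; only the first is expected to
# succeed, the rest are negative tests (the script wraps them in NOT).
ATTEMPTS = [
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1",
     "/tmp/tmp.SbD501IbQF", "ok - TLSTESTn1 runs verify I/O"),
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1",
     "/tmp/tmp.I2l6QHZLnU", "fails - key was never registered on the target"),
    ("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1",
     "/tmp/tmp.SbD501IbQF", "fails - no PSK for this host/subsystem identity"),
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2",
     "/tmp/tmp.SbD501IbQF", "fails - no PSK for this host/subsystem identity"),
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1",
     None, "fails - no PSK supplied at all"),
]

for hostnqn, subnqn, psk, outcome in ATTEMPTS:
    print(f"{hostnqn} -> {subnqn} (psk={psk}): {outcome}")
```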
00:23:33.350 [2024-05-15 15:41:46.271416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.350 [2024-05-15 15:41:46.351778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.608 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.608 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.608 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I2l6QHZLnU 00:23:33.865 [2024-05-15 15:41:46.733439] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.865 [2024-05-15 15:41:46.733568] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:33.865 [2024-05-15 15:41:46.741687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.865 [2024-05-15 15:41:46.742387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b2f0 (107): Transport endpoint is not connected 00:23:33.865 [2024-05-15 15:41:46.743376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b2f0 (9): Bad file descriptor 00:23:33.865 [2024-05-15 15:41:46.744375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:33.865 [2024-05-15 15:41:46.744397] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:33.865 [2024-05-15 15:41:46.744414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:33.865 request: 00:23:33.865 { 00:23:33.865 "name": "TLSTEST", 00:23:33.865 "trtype": "tcp", 00:23:33.865 "traddr": "10.0.0.2", 00:23:33.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.865 "adrfam": "ipv4", 00:23:33.865 "trsvcid": "4420", 00:23:33.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.865 "psk": "/tmp/tmp.I2l6QHZLnU", 00:23:33.865 "method": "bdev_nvme_attach_controller", 00:23:33.865 "req_id": 1 00:23:33.865 } 00:23:33.865 Got JSON-RPC error response 00:23:33.865 response: 00:23:33.865 { 00:23:33.865 "code": -32602, 00:23:33.865 "message": "Invalid parameters" 00:23:33.865 } 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1353986 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1353986 ']' 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1353986 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1353986 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1353986' 00:23:33.865 killing process with pid 1353986 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1353986 00:23:33.865 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.865 00:23:33.865 Latency(us) 00:23:33.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.865 =================================================================================================================== 00:23:33.865 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.865 [2024-05-15 15:41:46.793333] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:33.865 15:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1353986 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SbD501IbQF 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SbD501IbQF 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SbD501IbQF 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SbD501IbQF' 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1354126 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1354126 /var/tmp/bdevperf.sock 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1354126 ']' 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.122 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.122 [2024-05-15 15:41:47.053071] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:34.123 [2024-05-15 15:41:47.053159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354126 ] 00:23:34.123 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.123 [2024-05-15 15:41:47.089030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:34.123 [2024-05-15 15:41:47.121281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.123 [2024-05-15 15:41:47.203358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.380 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.380 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:34.380 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.SbD501IbQF 00:23:34.638 [2024-05-15 15:41:47.584376] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.638 [2024-05-15 15:41:47.584491] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:34.638 [2024-05-15 15:41:47.593766] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:34.638 [2024-05-15 15:41:47.593797] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:34.638 [2024-05-15 15:41:47.593848] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:34.638 [2024-05-15 15:41:47.594379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6782f0 (107): Transport endpoint is not connected 00:23:34.638 [2024-05-15 15:41:47.595368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6782f0 (9): Bad file descriptor 00:23:34.638 [2024-05-15 15:41:47.596368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.638 [2024-05-15 15:41:47.596390] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:34.638 [2024-05-15 15:41:47.596408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:34.638 request: 00:23:34.638 { 00:23:34.638 "name": "TLSTEST", 00:23:34.638 "trtype": "tcp", 00:23:34.638 "traddr": "10.0.0.2", 00:23:34.638 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:34.638 "adrfam": "ipv4", 00:23:34.638 "trsvcid": "4420", 00:23:34.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.638 "psk": "/tmp/tmp.SbD501IbQF", 00:23:34.638 "method": "bdev_nvme_attach_controller", 00:23:34.638 "req_id": 1 00:23:34.638 } 00:23:34.638 Got JSON-RPC error response 00:23:34.638 response: 00:23:34.638 { 00:23:34.638 "code": -32602, 00:23:34.638 "message": "Invalid parameters" 00:23:34.638 } 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1354126 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1354126 ']' 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1354126 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354126 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354126' 00:23:34.638 killing process with pid 1354126 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1354126 00:23:34.638 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.638 00:23:34.638 Latency(us) 00:23:34.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.638 =================================================================================================================== 00:23:34.638 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:34.638 [2024-05-15 15:41:47.648759] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:34.638 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1354126 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbD501IbQF 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbD501IbQF 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbD501IbQF 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SbD501IbQF' 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1354257 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1354257 /var/tmp/bdevperf.sock 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1354257 ']' 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.896 15:41:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.896 [2024-05-15 15:41:47.902308] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:34.896 [2024-05-15 15:41:47.902399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354257 ] 00:23:34.896 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.896 [2024-05-15 15:41:47.938539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:34.896 [2024-05-15 15:41:47.969707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.154 [2024-05-15 15:41:48.051242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.154 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.154 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.154 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SbD501IbQF 00:23:35.411 [2024-05-15 15:41:48.375675] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.411 [2024-05-15 15:41:48.375797] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:35.411 [2024-05-15 15:41:48.385799] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:35.411 [2024-05-15 15:41:48.385828] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:35.411 [2024-05-15 15:41:48.385925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.411 [2024-05-15 15:41:48.386649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe522f0 (107): Transport endpoint is not connected 00:23:35.411 [2024-05-15 15:41:48.387623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe522f0 (9): Bad file descriptor 00:23:35.411 [2024-05-15 15:41:48.388623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:35.411 [2024-05-15 15:41:48.388644] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:35.411 [2024-05-15 15:41:48.388663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:35.411 request: 00:23:35.411 { 00:23:35.411 "name": "TLSTEST", 00:23:35.411 "trtype": "tcp", 00:23:35.411 "traddr": "10.0.0.2", 00:23:35.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.411 "adrfam": "ipv4", 00:23:35.411 "trsvcid": "4420", 00:23:35.411 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.411 "psk": "/tmp/tmp.SbD501IbQF", 00:23:35.411 "method": "bdev_nvme_attach_controller", 00:23:35.411 "req_id": 1 00:23:35.411 } 00:23:35.411 Got JSON-RPC error response 00:23:35.411 response: 00:23:35.411 { 00:23:35.411 "code": -32602, 00:23:35.411 "message": "Invalid parameters" 00:23:35.411 } 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1354257 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1354257 ']' 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1354257 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354257 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354257' 00:23:35.411 killing process with pid 1354257 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1354257 00:23:35.411 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.411 00:23:35.411 Latency(us) 00:23:35.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.411 =================================================================================================================== 00:23:35.411 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.411 [2024-05-15 15:41:48.438645] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.411 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1354257 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1354273 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1354273 /var/tmp/bdevperf.sock 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1354273 ']' 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.669 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.669 [2024-05-15 15:41:48.705268] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:35.669 [2024-05-15 15:41:48.705361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354273 ] 00:23:35.669 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.669 [2024-05-15 15:41:48.740854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
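The last negative case, whose bdevperf instance (pid 1354273) is starting here, calls run_bdevperf with an empty PSK, so the controller attach below is issued without --psk against the listener that was created with the -k flag earlier; the connection is dropped before the admin queue comes up and the attach fails with the same -32602 response. All of these attempts are driven through the bdevperf RPC socket in the same way; the wrapper below is only an illustration of that flow, assembled from the commands visible in the traces:

```python
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
SOCK = "/var/tmp/bdevperf.sock"

def attach_controller(psk=None,
                      subnqn="nqn.2016-06.io.spdk:cnode1",
                      hostnqn="nqn.2016-06.io.spdk:host1"):
    # bdevperf is started with -z and waits on SOCK; the test then attaches
    # the TLS (or, in this last case, plain) NVMe/TCP controller over RPC.
    cmd = [f"{SPDK}/scripts/rpc.py", "-s", SOCK, "bdev_nvme_attach_controller",
           "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
           "-f", "ipv4", "-n", subnqn, "-q", hostnqn]
    if psk:
        cmd += ["--psk", psk]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on the expected failures

def perform_tests(seconds=20):
    # Only reached in the successful case; kicks off the timed verify workload.
    subprocess.run([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py",
                    "-t", str(seconds), "-s", SOCK, "perform_tests"], check=True)
```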
00:23:35.928 [2024-05-15 15:41:48.772462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.928 [2024-05-15 15:41:48.858333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.928 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.928 15:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.928 15:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:36.186 [2024-05-15 15:41:49.188857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:36.186 [2024-05-15 15:41:49.190739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0900 (9): Bad file descriptor 00:23:36.186 [2024-05-15 15:41:49.191735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.186 [2024-05-15 15:41:49.191756] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:36.186 [2024-05-15 15:41:49.191784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.186 request: 00:23:36.186 { 00:23:36.186 "name": "TLSTEST", 00:23:36.186 "trtype": "tcp", 00:23:36.186 "traddr": "10.0.0.2", 00:23:36.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.186 "adrfam": "ipv4", 00:23:36.186 "trsvcid": "4420", 00:23:36.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.186 "method": "bdev_nvme_attach_controller", 00:23:36.186 "req_id": 1 00:23:36.186 } 00:23:36.186 Got JSON-RPC error response 00:23:36.186 response: 00:23:36.186 { 00:23:36.186 "code": -32602, 00:23:36.186 "message": "Invalid parameters" 00:23:36.186 } 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1354273 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1354273 ']' 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1354273 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354273 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354273' 00:23:36.186 killing process with pid 1354273 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1354273 00:23:36.186 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.186 00:23:36.186 Latency(us) 00:23:36.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.186 =================================================================================================================== 00:23:36.186 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.186 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
1354273 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1350905 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1350905 ']' 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1350905 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:36.443 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:36.444 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1350905 00:23:36.444 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:36.444 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:36.444 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1350905' 00:23:36.444 killing process with pid 1350905 00:23:36.444 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1350905 00:23:36.444 [2024-05-15 15:41:49.484364] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:36.444 [2024-05-15 15:41:49.484408] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:36.444 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1350905 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.cFRcUV7Rs5 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:36.701 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.cFRcUV7Rs5 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- 
# xtrace_disable 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1354423 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1354423 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1354423 ']' 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.702 15:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.960 [2024-05-15 15:41:49.833943] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:36.960 [2024-05-15 15:41:49.834040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.960 [2024-05-15 15:41:49.876720] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:36.960 [2024-05-15 15:41:49.913816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.960 [2024-05-15 15:41:50.001837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.960 [2024-05-15 15:41:50.001907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.960 [2024-05-15 15:41:50.001930] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.960 [2024-05-15 15:41:50.001953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.960 [2024-05-15 15:41:50.001973] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
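The key_long value produced a few lines up (NVMeTLSkey-1:02:...) is the retained-PSK interchange string that tls.sh stores in /tmp/tmp.cFRcUV7Rs5 and later passes via --psk. The trace only shows format_key invoking an inline python snippet, so the following is a reconstruction sketch, assuming the payload is the literal configured-PSK string followed by its little-endian CRC-32, base64-encoded, with the second field selecting the hash (02 here):

  # Sketch of format_interchange_psk: NVMeTLSkey-1:<digest>:base64(key || crc32_le(key)):
  key=00112233445566778899aabbccddeeff0011223344556677
  digest=2   # assumption: 2 is rendered as "02" and selects the SHA-384 retained-PSK hash
  key_long=$(python3 - "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  key = sys.argv[1].encode()
  crc = zlib.crc32(key).to_bytes(4, "little")
  print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:", end="")
  EOF
  )
  key_path=$(mktemp)
  echo -n "$key_long" > "$key_path"
  chmod 0600 "$key_path"   # the target later refuses keys readable by group/other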
00:23:36.960 [2024-05-15 15:41:50.002013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.cFRcUV7Rs5 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cFRcUV7Rs5 00:23:37.218 15:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.476 [2024-05-15 15:41:50.412855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.476 15:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.734 15:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.992 [2024-05-15 15:41:50.914176] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:37.992 [2024-05-15 15:41:50.914280] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.992 [2024-05-15 15:41:50.914522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.992 15:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.250 malloc0 00:23:38.250 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.508 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:23:38.766 [2024-05-15 15:41:51.639663] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFRcUV7Rs5 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cFRcUV7Rs5' 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- 
# bdevperf_pid=1354707 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1354707 /var/tmp/bdevperf.sock 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1354707 ']' 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.766 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.766 [2024-05-15 15:41:51.701744] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:38.766 [2024-05-15 15:41:51.701828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354707 ] 00:23:38.766 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.766 [2024-05-15 15:41:51.737458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:38.766 [2024-05-15 15:41:51.768742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.766 [2024-05-15 15:41:51.852831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.024 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.024 15:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:39.024 15:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:23:39.280 [2024-05-15 15:41:52.174153] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.280 [2024-05-15 15:41:52.174294] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.280 TLSTESTn1 00:23:39.280 15:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.280 Running I/O for 10 seconds... 
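Collapsing the xtrace above, the happy-path TLS flow is a fixed sequence of RPCs against the target socket followed by a single attach on the bdevperf socket. The commands below are copied from the trace into a condensed replay sketch (not the tls.sh source itself); paths, NQNs and the key file are the ones shown above:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  key_path=/tmp/tmp.cFRcUV7Rs5

  # Target side (default /var/tmp/spdk.sock): TCP transport, subsystem, TLS listener, namespace, host+PSK.
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

  # Initiator side: attach over TLS through the already-running bdevperf, then drive I/O.
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests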
00:23:51.494 00:23:51.494 Latency(us) 00:23:51.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.494 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.494 Verification LBA range: start 0x0 length 0x2000 00:23:51.494 TLSTESTn1 : 10.03 3545.94 13.85 0.00 0.00 36025.33 5995.33 44467.39 00:23:51.494 =================================================================================================================== 00:23:51.494 Total : 3545.94 13.85 0.00 0.00 36025.33 5995.33 44467.39 00:23:51.494 0 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1354707 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1354707 ']' 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1354707 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354707 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354707' 00:23:51.494 killing process with pid 1354707 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1354707 00:23:51.494 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.494 00:23:51.494 Latency(us) 00:23:51.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.494 =================================================================================================================== 00:23:51.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.494 [2024-05-15 15:42:02.457044] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1354707 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.cFRcUV7Rs5 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFRcUV7Rs5 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFRcUV7Rs5 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFRcUV7Rs5 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cFRcUV7Rs5' 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1356128 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1356128 /var/tmp/bdevperf.sock 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1356128 ']' 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.494 [2024-05-15 15:42:02.726891] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:51.494 [2024-05-15 15:42:02.726980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356128 ] 00:23:51.494 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.494 [2024-05-15 15:42:02.764248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
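The chmod 0666 above deliberately loosens the key file before the second run_bdevperf, so the attach that follows is expected to fail: the PSK loader rejects key files whose group/other permission bits are set. An illustrative pre-flight check equivalent to that gate, written in shell as an analogy (the real check is in the bdev_nvme C code, and whether modes other than 0600 are tolerated is an assumption):

  key_path=/tmp/tmp.cFRcUV7Rs5
  mode=$(stat -c %a "$key_path")
  if [[ $mode != 600 && $mode != 400 ]]; then
      echo "Incorrect permissions for PSK file ($mode), refusing to load $key_path" >&2
      exit 1
  fi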
00:23:51.494 [2024-05-15 15:42:02.796033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.494 [2024-05-15 15:42:02.876129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:51.494 15:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:23:51.494 [2024-05-15 15:42:03.220427] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.494 [2024-05-15 15:42:03.220519] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:51.494 [2024-05-15 15:42:03.220533] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.cFRcUV7Rs5 00:23:51.494 request: 00:23:51.494 { 00:23:51.494 "name": "TLSTEST", 00:23:51.494 "trtype": "tcp", 00:23:51.494 "traddr": "10.0.0.2", 00:23:51.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.494 "adrfam": "ipv4", 00:23:51.494 "trsvcid": "4420", 00:23:51.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.494 "psk": "/tmp/tmp.cFRcUV7Rs5", 00:23:51.494 "method": "bdev_nvme_attach_controller", 00:23:51.494 "req_id": 1 00:23:51.494 } 00:23:51.494 Got JSON-RPC error response 00:23:51.494 response: 00:23:51.494 { 00:23:51.494 "code": -1, 00:23:51.494 "message": "Operation not permitted" 00:23:51.494 } 00:23:51.494 15:42:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1356128 00:23:51.494 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1356128 ']' 00:23:51.494 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1356128 00:23:51.494 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1356128 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1356128' 00:23:51.495 killing process with pid 1356128 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1356128 00:23:51.495 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.495 00:23:51.495 Latency(us) 00:23:51.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.495 =================================================================================================================== 00:23:51.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1356128 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1354423 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1354423 ']' 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1354423 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354423 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354423' 00:23:51.495 killing process with pid 1354423 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1354423 00:23:51.495 [2024-05-15 15:42:03.522716] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:51.495 [2024-05-15 15:42:03.522776] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1354423 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1356275 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1356275 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1356275 ']' 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.495 15:42:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.495 [2024-05-15 15:42:03.806030] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
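The es=1 / (( !es == 0 )) bookkeeping that follows the failed attach is the harness's expect-failure wrapper confirming the command really did exit non-zero before the test moves on. A stripped-down sketch of that pattern; the real NOT helper in autotest_common.sh additionally validates its argument, as the valid_exec_arg lines show:

  # Run a command that is supposed to fail; succeed only if it does (sketch of the NOT pattern).
  NOT() {
      if "$@"; then
          return 1    # the command unexpectedly succeeded
      fi
      return 0        # non-zero exit is the expected outcome
  }

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFRcUV7Rs5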
00:23:51.495 [2024-05-15 15:42:03.806124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.495 [2024-05-15 15:42:03.849975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:51.495 [2024-05-15 15:42:03.880952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.495 [2024-05-15 15:42:03.960959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.495 [2024-05-15 15:42:03.961012] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.495 [2024-05-15 15:42:03.961044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.495 [2024-05-15 15:42:03.961055] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.495 [2024-05-15 15:42:03.961065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.495 [2024-05-15 15:42:03.961097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.cFRcUV7Rs5 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cFRcUV7Rs5 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.cFRcUV7Rs5 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cFRcUV7Rs5 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.495 [2024-05-15 15:42:04.351728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.495 15:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:51.755 15:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.040 [2024-05-15 15:42:04.881132] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:52.040 [2024-05-15 15:42:04.881240] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.040 [2024-05-15 15:42:04.881488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.040 15:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.296 malloc0 00:23:52.296 15:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.553 15:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:23:52.811 [2024-05-15 15:42:05.731601] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:52.811 [2024-05-15 15:42:05.731645] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:52.811 [2024-05-15 15:42:05.731685] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:52.811 request: 00:23:52.811 { 00:23:52.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.811 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.811 "psk": "/tmp/tmp.cFRcUV7Rs5", 00:23:52.811 "method": "nvmf_subsystem_add_host", 00:23:52.811 "req_id": 1 00:23:52.811 } 00:23:52.811 Got JSON-RPC error response 00:23:52.811 response: 00:23:52.811 { 00:23:52.811 "code": -32603, 00:23:52.811 "message": "Internal error" 00:23:52.811 } 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1356275 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1356275 ']' 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1356275 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1356275 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1356275' 00:23:52.811 killing process with pid 1356275 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1356275 00:23:52.811 [2024-05-15 15:42:05.773139] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor 
of trtype' scheduled for removal in v24.09 hit 1 times 00:23:52.811 15:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1356275 00:23:53.068 15:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.cFRcUV7Rs5 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1356574 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1356574 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1356574 ']' 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:53.068 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.068 [2024-05-15 15:42:06.054089] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:53.068 [2024-05-15 15:42:06.054174] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.068 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.068 [2024-05-15 15:42:06.095581] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:53.068 [2024-05-15 15:42:06.132652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.325 [2024-05-15 15:42:06.225157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.325 [2024-05-15 15:42:06.225235] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.325 [2024-05-15 15:42:06.225254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.325 [2024-05-15 15:42:06.225268] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.325 [2024-05-15 15:42:06.225280] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
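Once this target instance is configured, tls.sh snapshots the full configuration with save_config (the JSON dumped further below). To pull out only the TLS-relevant entries from such a dump, a filter along these lines works; the jq expression is illustrative and keyed on the subsystems/config/method structure visible in the dump:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Keep the nvmf section, then only the add_host / add_listener calls that carry the PSK and secure_channel.
  $rpc_py save_config \
      | jq '.subsystems[] | select(.subsystem == "nvmf") | .config[]
            | select(.method == "nvmf_subsystem_add_host" or .method == "nvmf_subsystem_add_listener")'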
00:23:53.325 [2024-05-15 15:42:06.225322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.cFRcUV7Rs5 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cFRcUV7Rs5 00:23:53.325 15:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.583 [2024-05-15 15:42:06.642509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.583 15:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.840 15:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.097 [2024-05-15 15:42:07.155882] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:54.097 [2024-05-15 15:42:07.155977] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.097 [2024-05-15 15:42:07.156242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.097 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.354 malloc0 00:23:54.354 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.612 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:23:54.870 [2024-05-15 15:42:07.904477] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1356893 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1356893 /var/tmp/bdevperf.sock 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1356893 ']' 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.870 15:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.870 [2024-05-15 15:42:07.962070] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:54.870 [2024-05-15 15:42:07.962152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356893 ] 00:23:55.127 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.127 [2024-05-15 15:42:08.001881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:55.127 [2024-05-15 15:42:08.035495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.127 [2024-05-15 15:42:08.122251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.127 15:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.384 15:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:55.384 15:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:23:55.384 [2024-05-15 15:42:08.473325] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.384 [2024-05-15 15:42:08.473444] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:55.641 TLSTESTn1 00:23:55.641 15:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:55.898 15:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:55.898 "subsystems": [ 00:23:55.898 { 00:23:55.898 "subsystem": "keyring", 00:23:55.898 "config": [] 00:23:55.898 }, 00:23:55.898 { 00:23:55.898 "subsystem": "iobuf", 00:23:55.898 "config": [ 00:23:55.898 { 00:23:55.898 "method": "iobuf_set_options", 00:23:55.898 "params": { 00:23:55.898 "small_pool_count": 8192, 00:23:55.898 "large_pool_count": 1024, 00:23:55.898 "small_bufsize": 8192, 00:23:55.898 "large_bufsize": 135168 00:23:55.898 } 00:23:55.898 } 00:23:55.898 ] 00:23:55.898 }, 00:23:55.898 { 00:23:55.898 "subsystem": "sock", 00:23:55.898 "config": [ 00:23:55.898 { 00:23:55.898 "method": "sock_impl_set_options", 00:23:55.898 "params": { 00:23:55.898 "impl_name": "posix", 00:23:55.898 "recv_buf_size": 2097152, 00:23:55.898 "send_buf_size": 2097152, 00:23:55.898 "enable_recv_pipe": true, 00:23:55.898 "enable_quickack": false, 00:23:55.898 "enable_placement_id": 0, 00:23:55.898 "enable_zerocopy_send_server": true, 00:23:55.898 "enable_zerocopy_send_client": false, 00:23:55.898 "zerocopy_threshold": 0, 
00:23:55.898 "tls_version": 0, 00:23:55.898 "enable_ktls": false 00:23:55.898 } 00:23:55.898 }, 00:23:55.898 { 00:23:55.898 "method": "sock_impl_set_options", 00:23:55.898 "params": { 00:23:55.898 "impl_name": "ssl", 00:23:55.898 "recv_buf_size": 4096, 00:23:55.898 "send_buf_size": 4096, 00:23:55.898 "enable_recv_pipe": true, 00:23:55.898 "enable_quickack": false, 00:23:55.898 "enable_placement_id": 0, 00:23:55.898 "enable_zerocopy_send_server": true, 00:23:55.898 "enable_zerocopy_send_client": false, 00:23:55.898 "zerocopy_threshold": 0, 00:23:55.898 "tls_version": 0, 00:23:55.898 "enable_ktls": false 00:23:55.898 } 00:23:55.898 } 00:23:55.898 ] 00:23:55.898 }, 00:23:55.898 { 00:23:55.898 "subsystem": "vmd", 00:23:55.898 "config": [] 00:23:55.898 }, 00:23:55.898 { 00:23:55.898 "subsystem": "accel", 00:23:55.898 "config": [ 00:23:55.898 { 00:23:55.898 "method": "accel_set_options", 00:23:55.898 "params": { 00:23:55.898 "small_cache_size": 128, 00:23:55.899 "large_cache_size": 16, 00:23:55.899 "task_count": 2048, 00:23:55.899 "sequence_count": 2048, 00:23:55.899 "buf_count": 2048 00:23:55.899 } 00:23:55.899 } 00:23:55.899 ] 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "subsystem": "bdev", 00:23:55.899 "config": [ 00:23:55.899 { 00:23:55.899 "method": "bdev_set_options", 00:23:55.899 "params": { 00:23:55.899 "bdev_io_pool_size": 65535, 00:23:55.899 "bdev_io_cache_size": 256, 00:23:55.899 "bdev_auto_examine": true, 00:23:55.899 "iobuf_small_cache_size": 128, 00:23:55.899 "iobuf_large_cache_size": 16 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "bdev_raid_set_options", 00:23:55.899 "params": { 00:23:55.899 "process_window_size_kb": 1024 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "bdev_iscsi_set_options", 00:23:55.899 "params": { 00:23:55.899 "timeout_sec": 30 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "bdev_nvme_set_options", 00:23:55.899 "params": { 00:23:55.899 "action_on_timeout": "none", 00:23:55.899 "timeout_us": 0, 00:23:55.899 "timeout_admin_us": 0, 00:23:55.899 "keep_alive_timeout_ms": 10000, 00:23:55.899 "arbitration_burst": 0, 00:23:55.899 "low_priority_weight": 0, 00:23:55.899 "medium_priority_weight": 0, 00:23:55.899 "high_priority_weight": 0, 00:23:55.899 "nvme_adminq_poll_period_us": 10000, 00:23:55.899 "nvme_ioq_poll_period_us": 0, 00:23:55.899 "io_queue_requests": 0, 00:23:55.899 "delay_cmd_submit": true, 00:23:55.899 "transport_retry_count": 4, 00:23:55.899 "bdev_retry_count": 3, 00:23:55.899 "transport_ack_timeout": 0, 00:23:55.899 "ctrlr_loss_timeout_sec": 0, 00:23:55.899 "reconnect_delay_sec": 0, 00:23:55.899 "fast_io_fail_timeout_sec": 0, 00:23:55.899 "disable_auto_failback": false, 00:23:55.899 "generate_uuids": false, 00:23:55.899 "transport_tos": 0, 00:23:55.899 "nvme_error_stat": false, 00:23:55.899 "rdma_srq_size": 0, 00:23:55.899 "io_path_stat": false, 00:23:55.899 "allow_accel_sequence": false, 00:23:55.899 "rdma_max_cq_size": 0, 00:23:55.899 "rdma_cm_event_timeout_ms": 0, 00:23:55.899 "dhchap_digests": [ 00:23:55.899 "sha256", 00:23:55.899 "sha384", 00:23:55.899 "sha512" 00:23:55.899 ], 00:23:55.899 "dhchap_dhgroups": [ 00:23:55.899 "null", 00:23:55.899 "ffdhe2048", 00:23:55.899 "ffdhe3072", 00:23:55.899 "ffdhe4096", 00:23:55.899 "ffdhe6144", 00:23:55.899 "ffdhe8192" 00:23:55.899 ] 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "bdev_nvme_set_hotplug", 00:23:55.899 "params": { 00:23:55.899 "period_us": 100000, 00:23:55.899 "enable": false 00:23:55.899 } 00:23:55.899 }, 
00:23:55.899 { 00:23:55.899 "method": "bdev_malloc_create", 00:23:55.899 "params": { 00:23:55.899 "name": "malloc0", 00:23:55.899 "num_blocks": 8192, 00:23:55.899 "block_size": 4096, 00:23:55.899 "physical_block_size": 4096, 00:23:55.899 "uuid": "2436300a-0363-4233-9779-05ad88240922", 00:23:55.899 "optimal_io_boundary": 0 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "bdev_wait_for_examine" 00:23:55.899 } 00:23:55.899 ] 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "subsystem": "nbd", 00:23:55.899 "config": [] 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "subsystem": "scheduler", 00:23:55.899 "config": [ 00:23:55.899 { 00:23:55.899 "method": "framework_set_scheduler", 00:23:55.899 "params": { 00:23:55.899 "name": "static" 00:23:55.899 } 00:23:55.899 } 00:23:55.899 ] 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "subsystem": "nvmf", 00:23:55.899 "config": [ 00:23:55.899 { 00:23:55.899 "method": "nvmf_set_config", 00:23:55.899 "params": { 00:23:55.899 "discovery_filter": "match_any", 00:23:55.899 "admin_cmd_passthru": { 00:23:55.899 "identify_ctrlr": false 00:23:55.899 } 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "nvmf_set_max_subsystems", 00:23:55.899 "params": { 00:23:55.899 "max_subsystems": 1024 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "nvmf_set_crdt", 00:23:55.899 "params": { 00:23:55.899 "crdt1": 0, 00:23:55.899 "crdt2": 0, 00:23:55.899 "crdt3": 0 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "nvmf_create_transport", 00:23:55.899 "params": { 00:23:55.899 "trtype": "TCP", 00:23:55.899 "max_queue_depth": 128, 00:23:55.899 "max_io_qpairs_per_ctrlr": 127, 00:23:55.899 "in_capsule_data_size": 4096, 00:23:55.899 "max_io_size": 131072, 00:23:55.899 "io_unit_size": 131072, 00:23:55.899 "max_aq_depth": 128, 00:23:55.899 "num_shared_buffers": 511, 00:23:55.899 "buf_cache_size": 4294967295, 00:23:55.899 "dif_insert_or_strip": false, 00:23:55.899 "zcopy": false, 00:23:55.899 "c2h_success": false, 00:23:55.899 "sock_priority": 0, 00:23:55.899 "abort_timeout_sec": 1, 00:23:55.899 "ack_timeout": 0, 00:23:55.899 "data_wr_pool_size": 0 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.899 "method": "nvmf_create_subsystem", 00:23:55.899 "params": { 00:23:55.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.899 "allow_any_host": false, 00:23:55.899 "serial_number": "SPDK00000000000001", 00:23:55.899 "model_number": "SPDK bdev Controller", 00:23:55.899 "max_namespaces": 10, 00:23:55.899 "min_cntlid": 1, 00:23:55.899 "max_cntlid": 65519, 00:23:55.899 "ana_reporting": false 00:23:55.899 } 00:23:55.899 }, 00:23:55.899 { 00:23:55.900 "method": "nvmf_subsystem_add_host", 00:23:55.900 "params": { 00:23:55.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.900 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.900 "psk": "/tmp/tmp.cFRcUV7Rs5" 00:23:55.900 } 00:23:55.900 }, 00:23:55.900 { 00:23:55.900 "method": "nvmf_subsystem_add_ns", 00:23:55.900 "params": { 00:23:55.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.900 "namespace": { 00:23:55.900 "nsid": 1, 00:23:55.900 "bdev_name": "malloc0", 00:23:55.900 "nguid": "2436300A03634233977905AD88240922", 00:23:55.900 "uuid": "2436300a-0363-4233-9779-05ad88240922", 00:23:55.900 "no_auto_visible": false 00:23:55.900 } 00:23:55.900 } 00:23:55.900 }, 00:23:55.900 { 00:23:55.900 "method": "nvmf_subsystem_add_listener", 00:23:55.900 "params": { 00:23:55.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.900 "listen_address": { 00:23:55.900 "trtype": "TCP", 00:23:55.900 "adrfam": 
"IPv4", 00:23:55.900 "traddr": "10.0.0.2", 00:23:55.900 "trsvcid": "4420" 00:23:55.900 }, 00:23:55.900 "secure_channel": true 00:23:55.900 } 00:23:55.900 } 00:23:55.900 ] 00:23:55.900 } 00:23:55.900 ] 00:23:55.900 }' 00:23:55.900 15:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:56.159 15:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:56.159 "subsystems": [ 00:23:56.159 { 00:23:56.159 "subsystem": "keyring", 00:23:56.159 "config": [] 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "subsystem": "iobuf", 00:23:56.159 "config": [ 00:23:56.159 { 00:23:56.159 "method": "iobuf_set_options", 00:23:56.159 "params": { 00:23:56.159 "small_pool_count": 8192, 00:23:56.159 "large_pool_count": 1024, 00:23:56.159 "small_bufsize": 8192, 00:23:56.159 "large_bufsize": 135168 00:23:56.159 } 00:23:56.159 } 00:23:56.159 ] 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "subsystem": "sock", 00:23:56.159 "config": [ 00:23:56.159 { 00:23:56.159 "method": "sock_impl_set_options", 00:23:56.159 "params": { 00:23:56.159 "impl_name": "posix", 00:23:56.159 "recv_buf_size": 2097152, 00:23:56.159 "send_buf_size": 2097152, 00:23:56.159 "enable_recv_pipe": true, 00:23:56.159 "enable_quickack": false, 00:23:56.159 "enable_placement_id": 0, 00:23:56.159 "enable_zerocopy_send_server": true, 00:23:56.159 "enable_zerocopy_send_client": false, 00:23:56.159 "zerocopy_threshold": 0, 00:23:56.159 "tls_version": 0, 00:23:56.159 "enable_ktls": false 00:23:56.159 } 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "method": "sock_impl_set_options", 00:23:56.159 "params": { 00:23:56.159 "impl_name": "ssl", 00:23:56.159 "recv_buf_size": 4096, 00:23:56.159 "send_buf_size": 4096, 00:23:56.159 "enable_recv_pipe": true, 00:23:56.159 "enable_quickack": false, 00:23:56.159 "enable_placement_id": 0, 00:23:56.159 "enable_zerocopy_send_server": true, 00:23:56.159 "enable_zerocopy_send_client": false, 00:23:56.159 "zerocopy_threshold": 0, 00:23:56.159 "tls_version": 0, 00:23:56.159 "enable_ktls": false 00:23:56.159 } 00:23:56.159 } 00:23:56.159 ] 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "subsystem": "vmd", 00:23:56.159 "config": [] 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "subsystem": "accel", 00:23:56.159 "config": [ 00:23:56.159 { 00:23:56.159 "method": "accel_set_options", 00:23:56.159 "params": { 00:23:56.159 "small_cache_size": 128, 00:23:56.159 "large_cache_size": 16, 00:23:56.159 "task_count": 2048, 00:23:56.159 "sequence_count": 2048, 00:23:56.159 "buf_count": 2048 00:23:56.159 } 00:23:56.159 } 00:23:56.159 ] 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "subsystem": "bdev", 00:23:56.159 "config": [ 00:23:56.159 { 00:23:56.159 "method": "bdev_set_options", 00:23:56.159 "params": { 00:23:56.159 "bdev_io_pool_size": 65535, 00:23:56.159 "bdev_io_cache_size": 256, 00:23:56.159 "bdev_auto_examine": true, 00:23:56.159 "iobuf_small_cache_size": 128, 00:23:56.159 "iobuf_large_cache_size": 16 00:23:56.159 } 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "method": "bdev_raid_set_options", 00:23:56.159 "params": { 00:23:56.159 "process_window_size_kb": 1024 00:23:56.159 } 00:23:56.159 }, 00:23:56.159 { 00:23:56.159 "method": "bdev_iscsi_set_options", 00:23:56.160 "params": { 00:23:56.160 "timeout_sec": 30 00:23:56.160 } 00:23:56.160 }, 00:23:56.160 { 00:23:56.160 "method": "bdev_nvme_set_options", 00:23:56.160 "params": { 00:23:56.160 "action_on_timeout": "none", 00:23:56.160 "timeout_us": 0, 00:23:56.160 "timeout_admin_us": 0, 
00:23:56.160 "keep_alive_timeout_ms": 10000, 00:23:56.160 "arbitration_burst": 0, 00:23:56.160 "low_priority_weight": 0, 00:23:56.160 "medium_priority_weight": 0, 00:23:56.160 "high_priority_weight": 0, 00:23:56.160 "nvme_adminq_poll_period_us": 10000, 00:23:56.160 "nvme_ioq_poll_period_us": 0, 00:23:56.160 "io_queue_requests": 512, 00:23:56.160 "delay_cmd_submit": true, 00:23:56.160 "transport_retry_count": 4, 00:23:56.160 "bdev_retry_count": 3, 00:23:56.160 "transport_ack_timeout": 0, 00:23:56.160 "ctrlr_loss_timeout_sec": 0, 00:23:56.160 "reconnect_delay_sec": 0, 00:23:56.160 "fast_io_fail_timeout_sec": 0, 00:23:56.160 "disable_auto_failback": false, 00:23:56.160 "generate_uuids": false, 00:23:56.160 "transport_tos": 0, 00:23:56.160 "nvme_error_stat": false, 00:23:56.160 "rdma_srq_size": 0, 00:23:56.160 "io_path_stat": false, 00:23:56.160 "allow_accel_sequence": false, 00:23:56.160 "rdma_max_cq_size": 0, 00:23:56.160 "rdma_cm_event_timeout_ms": 0, 00:23:56.160 "dhchap_digests": [ 00:23:56.160 "sha256", 00:23:56.160 "sha384", 00:23:56.160 "sha512" 00:23:56.160 ], 00:23:56.160 "dhchap_dhgroups": [ 00:23:56.160 "null", 00:23:56.160 "ffdhe2048", 00:23:56.160 "ffdhe3072", 00:23:56.160 "ffdhe4096", 00:23:56.160 "ffdhe6144", 00:23:56.160 "ffdhe8192" 00:23:56.160 ] 00:23:56.160 } 00:23:56.160 }, 00:23:56.160 { 00:23:56.160 "method": "bdev_nvme_attach_controller", 00:23:56.160 "params": { 00:23:56.160 "name": "TLSTEST", 00:23:56.160 "trtype": "TCP", 00:23:56.160 "adrfam": "IPv4", 00:23:56.160 "traddr": "10.0.0.2", 00:23:56.160 "trsvcid": "4420", 00:23:56.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.160 "prchk_reftag": false, 00:23:56.160 "prchk_guard": false, 00:23:56.160 "ctrlr_loss_timeout_sec": 0, 00:23:56.160 "reconnect_delay_sec": 0, 00:23:56.160 "fast_io_fail_timeout_sec": 0, 00:23:56.160 "psk": "/tmp/tmp.cFRcUV7Rs5", 00:23:56.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.160 "hdgst": false, 00:23:56.160 "ddgst": false 00:23:56.160 } 00:23:56.160 }, 00:23:56.160 { 00:23:56.160 "method": "bdev_nvme_set_hotplug", 00:23:56.160 "params": { 00:23:56.160 "period_us": 100000, 00:23:56.160 "enable": false 00:23:56.160 } 00:23:56.160 }, 00:23:56.160 { 00:23:56.160 "method": "bdev_wait_for_examine" 00:23:56.160 } 00:23:56.160 ] 00:23:56.160 }, 00:23:56.160 { 00:23:56.160 "subsystem": "nbd", 00:23:56.160 "config": [] 00:23:56.160 } 00:23:56.160 ] 00:23:56.160 }' 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1356893 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1356893 ']' 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1356893 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1356893 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1356893' 00:23:56.160 killing process with pid 1356893 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1356893 00:23:56.160 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.160 00:23:56.160 Latency(us) 
00:23:56.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.160 =================================================================================================================== 00:23:56.160 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.160 [2024-05-15 15:42:09.226598] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:56.160 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1356893 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1356574 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1356574 ']' 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1356574 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1356574 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1356574' 00:23:56.417 killing process with pid 1356574 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1356574 00:23:56.417 [2024-05-15 15:42:09.465786] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:56.417 [2024-05-15 15:42:09.465839] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:56.417 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1356574 00:23:56.675 15:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:56.675 15:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:56.675 "subsystems": [ 00:23:56.675 { 00:23:56.675 "subsystem": "keyring", 00:23:56.675 "config": [] 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "subsystem": "iobuf", 00:23:56.675 "config": [ 00:23:56.675 { 00:23:56.675 "method": "iobuf_set_options", 00:23:56.675 "params": { 00:23:56.675 "small_pool_count": 8192, 00:23:56.675 "large_pool_count": 1024, 00:23:56.675 "small_bufsize": 8192, 00:23:56.675 "large_bufsize": 135168 00:23:56.675 } 00:23:56.675 } 00:23:56.675 ] 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "subsystem": "sock", 00:23:56.675 "config": [ 00:23:56.675 { 00:23:56.675 "method": "sock_impl_set_options", 00:23:56.675 "params": { 00:23:56.675 "impl_name": "posix", 00:23:56.675 "recv_buf_size": 2097152, 00:23:56.675 "send_buf_size": 2097152, 00:23:56.675 "enable_recv_pipe": true, 00:23:56.675 "enable_quickack": false, 00:23:56.675 "enable_placement_id": 0, 00:23:56.675 "enable_zerocopy_send_server": true, 00:23:56.675 "enable_zerocopy_send_client": false, 00:23:56.675 "zerocopy_threshold": 0, 00:23:56.675 "tls_version": 0, 00:23:56.675 "enable_ktls": false 00:23:56.675 } 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "method": "sock_impl_set_options", 00:23:56.675 "params": { 00:23:56.675 "impl_name": "ssl", 00:23:56.675 "recv_buf_size": 4096, 00:23:56.675 
"send_buf_size": 4096, 00:23:56.675 "enable_recv_pipe": true, 00:23:56.675 "enable_quickack": false, 00:23:56.675 "enable_placement_id": 0, 00:23:56.675 "enable_zerocopy_send_server": true, 00:23:56.675 "enable_zerocopy_send_client": false, 00:23:56.675 "zerocopy_threshold": 0, 00:23:56.675 "tls_version": 0, 00:23:56.675 "enable_ktls": false 00:23:56.675 } 00:23:56.675 } 00:23:56.675 ] 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "subsystem": "vmd", 00:23:56.675 "config": [] 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "subsystem": "accel", 00:23:56.675 "config": [ 00:23:56.675 { 00:23:56.675 "method": "accel_set_options", 00:23:56.675 "params": { 00:23:56.675 "small_cache_size": 128, 00:23:56.675 "large_cache_size": 16, 00:23:56.675 "task_count": 2048, 00:23:56.675 "sequence_count": 2048, 00:23:56.675 "buf_count": 2048 00:23:56.675 } 00:23:56.675 } 00:23:56.675 ] 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "subsystem": "bdev", 00:23:56.675 "config": [ 00:23:56.675 { 00:23:56.675 "method": "bdev_set_options", 00:23:56.675 "params": { 00:23:56.675 "bdev_io_pool_size": 65535, 00:23:56.675 "bdev_io_cache_size": 256, 00:23:56.675 "bdev_auto_examine": true, 00:23:56.675 "iobuf_small_cache_size": 128, 00:23:56.675 "iobuf_large_cache_size": 16 00:23:56.675 } 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "method": "bdev_raid_set_options", 00:23:56.675 "params": { 00:23:56.675 "process_window_size_kb": 1024 00:23:56.675 } 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "method": "bdev_iscsi_set_options", 00:23:56.675 "params": { 00:23:56.675 "timeout_sec": 30 00:23:56.675 } 00:23:56.675 }, 00:23:56.675 { 00:23:56.675 "method": "bdev_nvme_set_options", 00:23:56.675 "params": { 00:23:56.675 "action_on_timeout": "none", 00:23:56.675 "timeout_us": 0, 00:23:56.675 "timeout_admin_us": 0, 00:23:56.675 "keep_alive_timeout_ms": 10000, 00:23:56.675 "arbitration_burst": 0, 00:23:56.675 "low_priority_weight": 0, 00:23:56.675 "medium_priority_weight": 0, 00:23:56.675 "high_priority_weight": 0, 00:23:56.675 "nvme_adminq_poll_period_us": 10000, 00:23:56.675 "nvme_ioq_poll_period_us": 0, 00:23:56.675 "io_queue_requests": 0, 00:23:56.675 "delay_cmd_submit": true, 00:23:56.675 "transport_retry_count": 4, 00:23:56.675 "bdev_retry_count": 3, 00:23:56.675 "transport_ack_timeout": 0, 00:23:56.675 "ctrlr_loss_timeout_sec": 0, 00:23:56.675 "reconnect_delay_sec": 0, 00:23:56.675 "fast_io_fail_timeout_sec": 0, 00:23:56.675 "disable_auto_failback": false, 00:23:56.675 "generate_uuids": false, 00:23:56.675 "transport_tos": 0, 00:23:56.675 "nvme_error_stat": false, 00:23:56.675 "rdma_srq_size": 0, 00:23:56.675 "io_path_stat": false, 00:23:56.675 "allow_accel_sequence": false, 00:23:56.675 "rdma_max_cq_size": 0, 00:23:56.675 "rdma_cm_event_timeout_ms": 0, 00:23:56.675 "dhchap_digests": [ 00:23:56.675 "sha256", 00:23:56.675 "sha384", 00:23:56.675 "sha512" 00:23:56.676 ], 00:23:56.676 "dhchap_dhgroups": [ 00:23:56.676 "null", 00:23:56.676 "ffdhe2048", 00:23:56.676 "ffdhe3072", 00:23:56.676 "ffdhe4096", 00:23:56.676 "ffdhe6144", 00:23:56.676 "ffdhe8192" 00:23:56.676 ] 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "bdev_nvme_set_hotplug", 00:23:56.676 "params": { 00:23:56.676 "period_us": 100000, 00:23:56.676 "enable": false 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "bdev_malloc_create", 00:23:56.676 "params": { 00:23:56.676 "name": "malloc0", 00:23:56.676 "num_blocks": 8192, 00:23:56.676 "block_size": 4096, 00:23:56.676 "physical_block_size": 4096, 00:23:56.676 "uuid": 
"2436300a-0363-4233-9779-05ad88240922", 00:23:56.676 "optimal_io_boundary": 0 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "bdev_wait_for_examine" 00:23:56.676 } 00:23:56.676 ] 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "subsystem": "nbd", 00:23:56.676 "config": [] 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "subsystem": "scheduler", 00:23:56.676 "config": [ 00:23:56.676 { 00:23:56.676 "method": "framework_set_scheduler", 00:23:56.676 "params": { 00:23:56.676 "name": "static" 00:23:56.676 } 00:23:56.676 } 00:23:56.676 ] 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "subsystem": "nvmf", 00:23:56.676 "config": [ 00:23:56.676 { 00:23:56.676 "method": "nvmf_set_config", 00:23:56.676 "params": { 00:23:56.676 "discovery_filter": "match_any", 00:23:56.676 "admin_cmd_passthru": { 00:23:56.676 "identify_ctrlr": false 00:23:56.676 } 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_set_max_subsystems", 00:23:56.676 "params": { 00:23:56.676 "max_subsystems": 1024 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_set_crdt", 00:23:56.676 "params": { 00:23:56.676 "crdt1": 0, 00:23:56.676 "crdt2": 0, 00:23:56.676 "crdt3": 0 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_create_transport", 00:23:56.676 "params": { 00:23:56.676 "trtype": "TCP", 00:23:56.676 "max_queue_depth": 128, 00:23:56.676 "max_io_qpairs_per_ctrlr": 127, 00:23:56.676 "in_capsule_data_size": 4096, 00:23:56.676 "max_io_size": 131072, 00:23:56.676 "io_unit_size": 131072, 00:23:56.676 "max_aq_depth": 128, 00:23:56.676 "num_shared_buffers": 511, 00:23:56.676 "buf_cache_size": 4294967295, 00:23:56.676 "dif_insert_or_strip": false, 00:23:56.676 "zcopy": false, 00:23:56.676 "c2h_success": false, 00:23:56.676 "sock_priority": 0, 00:23:56.676 "abort_timeout_sec": 1, 00:23:56.676 "ack_timeout": 0, 00:23:56.676 "data_wr_pool_size": 0 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_create_subsystem", 00:23:56.676 "params": { 00:23:56.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.676 "allow_any_host": false, 00:23:56.676 "serial_number": "SPDK00000000000001", 00:23:56.676 "model_number": "SPDK bdev Controller", 00:23:56.676 "max_namespaces": 10, 00:23:56.676 "min_cntlid": 1, 00:23:56.676 "max_cntlid": 65519, 00:23:56.676 "ana_reporting": false 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_subsystem_add_host", 00:23:56.676 "params": { 00:23:56.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.676 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.676 "psk": "/tmp/tmp.cFRcUV7Rs5" 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_subsystem_add_ns", 00:23:56.676 "params": { 00:23:56.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.676 "namespace": { 00:23:56.676 "nsid": 1, 00:23:56.676 "bdev_name": "malloc0", 00:23:56.676 "nguid": "2436300A03634233977905AD88240922", 00:23:56.676 "uuid": "2436300a-0363-4233-9779-05ad88240922", 00:23:56.676 "no_auto_visible": false 00:23:56.676 } 00:23:56.676 } 00:23:56.676 }, 00:23:56.676 { 00:23:56.676 "method": "nvmf_subsystem_add_listener", 00:23:56.676 "params": { 00:23:56.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.676 "listen_address": { 00:23:56.676 "trtype": "TCP", 00:23:56.676 "adrfam": "IPv4", 00:23:56.676 "traddr": "10.0.0.2", 00:23:56.676 "trsvcid": "4420" 00:23:56.676 }, 00:23:56.676 "secure_channel": true 00:23:56.676 } 00:23:56.676 } 00:23:56.676 ] 00:23:56.676 } 00:23:56.676 ] 00:23:56.676 }' 00:23:56.676 15:42:09 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.676 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:56.676 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.676 15:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1357518 00:23:56.676 15:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:56.676 15:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1357518 00:23:56.676 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1357518 ']' 00:23:56.677 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.677 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:56.677 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.677 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.677 15:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.677 [2024-05-15 15:42:09.754009] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:56.677 [2024-05-15 15:42:09.754081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.934 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.934 [2024-05-15 15:42:09.797400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:56.934 [2024-05-15 15:42:09.832361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.934 [2024-05-15 15:42:09.918163] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.934 [2024-05-15 15:42:09.918234] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.934 [2024-05-15 15:42:09.918263] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.934 [2024-05-15 15:42:09.918276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.934 [2024-05-15 15:42:09.918289] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
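The JSON block echoed just above is the complete target configuration for this pass: it pre-creates the TCP transport, the malloc0 namespace, the TLS-enabled listener on 10.0.0.2:4420 ("secure_channel": true), and authorizes nqn.2016-06.io.spdk:host1 with the PSK file /tmp/tmp.cFRcUV7Rs5. The trace shows nvmf_tgt consuming it as -c /dev/fd/62. A minimal stand-alone sketch of that hand-off follows; tgt_config.json is a placeholder filename, and producing the descriptor with process substitution is an assumption about how tls.sh wires it, not something visible in this log.

# Sketch only (assumptions noted above): launch the target inside the test netns
# and feed it the JSON configuration dumped earlier in the trace.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 \
    -c <(cat tgt_config.json)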
00:23:56.934 [2024-05-15 15:42:09.918374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.193 [2024-05-15 15:42:10.148258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.193 [2024-05-15 15:42:10.164197] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:57.193 [2024-05-15 15:42:10.180225] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:57.193 [2024-05-15 15:42:10.180301] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.193 [2024-05-15 15:42:10.194423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1357671 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1357671 /var/tmp/bdevperf.sock 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1357671 ']' 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:57.759 "subsystems": [ 00:23:57.759 { 00:23:57.759 "subsystem": "keyring", 00:23:57.759 "config": [] 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "subsystem": "iobuf", 00:23:57.759 "config": [ 00:23:57.759 { 00:23:57.759 "method": "iobuf_set_options", 00:23:57.759 "params": { 00:23:57.759 "small_pool_count": 8192, 00:23:57.759 "large_pool_count": 1024, 00:23:57.759 "small_bufsize": 8192, 00:23:57.759 "large_bufsize": 135168 00:23:57.759 } 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "subsystem": "sock", 00:23:57.759 "config": [ 00:23:57.759 { 00:23:57.759 "method": "sock_impl_set_options", 00:23:57.759 "params": { 00:23:57.759 "impl_name": "posix", 00:23:57.759 "recv_buf_size": 2097152, 00:23:57.759 "send_buf_size": 2097152, 00:23:57.759 "enable_recv_pipe": true, 00:23:57.759 "enable_quickack": false, 00:23:57.759 "enable_placement_id": 0, 00:23:57.759 "enable_zerocopy_send_server": true, 00:23:57.759 "enable_zerocopy_send_client": false, 00:23:57.759 "zerocopy_threshold": 0, 00:23:57.759 "tls_version": 0, 00:23:57.759 "enable_ktls": false 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "sock_impl_set_options", 00:23:57.759 "params": { 00:23:57.759 "impl_name": "ssl", 00:23:57.759 "recv_buf_size": 4096, 00:23:57.759 
"send_buf_size": 4096, 00:23:57.759 "enable_recv_pipe": true, 00:23:57.759 "enable_quickack": false, 00:23:57.759 "enable_placement_id": 0, 00:23:57.759 "enable_zerocopy_send_server": true, 00:23:57.759 "enable_zerocopy_send_client": false, 00:23:57.759 "zerocopy_threshold": 0, 00:23:57.759 "tls_version": 0, 00:23:57.759 "enable_ktls": false 00:23:57.759 } 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "subsystem": "vmd", 00:23:57.759 "config": [] 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "subsystem": "accel", 00:23:57.759 "config": [ 00:23:57.759 { 00:23:57.759 "method": "accel_set_options", 00:23:57.759 "params": { 00:23:57.759 "small_cache_size": 128, 00:23:57.759 "large_cache_size": 16, 00:23:57.759 "task_count": 2048, 00:23:57.759 "sequence_count": 2048, 00:23:57.759 "buf_count": 2048 00:23:57.759 } 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "subsystem": "bdev", 00:23:57.759 "config": [ 00:23:57.759 { 00:23:57.759 "method": "bdev_set_options", 00:23:57.759 "params": { 00:23:57.759 "bdev_io_pool_size": 65535, 00:23:57.759 "bdev_io_cache_size": 256, 00:23:57.759 "bdev_auto_examine": true, 00:23:57.759 "iobuf_small_cache_size": 128, 00:23:57.759 "iobuf_large_cache_size": 16 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "bdev_raid_set_options", 00:23:57.759 "params": { 00:23:57.759 "process_window_size_kb": 1024 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "bdev_iscsi_set_options", 00:23:57.759 "params": { 00:23:57.759 "timeout_sec": 30 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "bdev_nvme_set_options", 00:23:57.759 "params": { 00:23:57.759 "action_on_timeout": "none", 00:23:57.759 "timeout_us": 0, 00:23:57.759 "timeout_admin_us": 0, 00:23:57.759 "keep_alive_timeout_ms": 10000, 00:23:57.759 "arbitration_burst": 0, 00:23:57.759 "low_priority_weight": 0, 00:23:57.759 "medium_priority_weight": 0, 00:23:57.759 "high_priority_weight": 0, 00:23:57.759 "nvme_adminq_poll_period_us": 10000, 00:23:57.759 "nvme_ioq_poll_period_us": 0, 00:23:57.759 "io_queue_requests": 512, 00:23:57.759 "delay_cmd_submit": true, 00:23:57.759 "transport_retry_count": 4, 00:23:57.759 "bdev_retry_count": 3, 00:23:57.759 "transport_ack_timeout": 0, 00:23:57.759 "ctrlr_loss_timeout_sec": 0, 00:23:57.759 "reconnect_delay_sec": 0, 00:23:57.759 "fast_io_fail_timeout_sec": 0, 00:23:57.759 "disable_auto_failback": false, 00:23:57.759 "generate_uuids": false, 00:23:57.759 "transport_tos": 0, 00:23:57.759 "nvme_error_stat": false, 00:23:57.759 "rdma_srq_size": 0, 00:23:57.759 "io_path_stat": false, 00:23:57.759 "allow_accel_sequence": false, 00:23:57.759 "rdma_max_cq_size": 0, 00:23:57.759 "rdma_cm_event_timeout_ms": 0, 00:23:57.759 "dhchap_digests": [ 00:23:57.759 "sha256", 00:23:57.759 "sha384", 00:23:57.759 "sha512" 00:23:57.759 ], 00:23:57.759 "dhchap_dhgroups": [ 00:23:57.759 "null", 00:23:57.759 "ffdhe2048", 00:23:57.759 "ffdhe3072", 00:23:57.759 "ffdhe4096", 00:23:57.759 "ffdhe6144", 00:23:57.759 "ffdhe8192" 00:23:57.759 ] 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "bdev_nvme_attach_controller", 00:23:57.759 "params": { 00:23:57.759 "name": "TLSTEST", 00:23:57.759 "trtype": "TCP", 00:23:57.759 "adrfam": "IPv4", 00:23:57.759 "traddr": "10.0.0.2", 00:23:57.759 "trsvcid": "4420", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.759 "prchk_reftag": false, 00:23:57.759 "prchk_guard": false, 00:23:57.759 "ctrlr_loss_timeout_sec": 0, 00:23:57.759 
"reconnect_delay_sec": 0, 00:23:57.759 "fast_io_fail_timeout_sec": 0, 00:23:57.759 "psk": "/tmp/tmp.cFRcUV7Rs5", 00:23:57.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.759 "hdgst": false, 00:23:57.759 "ddgst": false 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "bdev_nvme_set_hotplug", 00:23:57.759 "params": { 00:23:57.759 "period_us": 100000, 00:23:57.759 "enable": false 00:23:57.759 } 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "method": "bdev_wait_for_examine" 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 }, 00:23:57.759 { 00:23:57.759 "subsystem": "nbd", 00:23:57.759 "config": [] 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 }' 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:57.759 15:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.760 [2024-05-15 15:42:10.761540] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:57.760 [2024-05-15 15:42:10.761610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357671 ] 00:23:57.760 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.760 [2024-05-15 15:42:10.797582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:57.760 [2024-05-15 15:42:10.827022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.017 [2024-05-15 15:42:10.910589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.017 [2024-05-15 15:42:11.060520] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.017 [2024-05-15 15:42:11.060640] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:58.948 15:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:58.948 15:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:58.948 15:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:58.948 Running I/O for 10 seconds... 
00:24:08.909 00:24:08.909 Latency(us) 00:24:08.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.909 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:08.909 Verification LBA range: start 0x0 length 0x2000 00:24:08.909 TLSTESTn1 : 10.02 3508.85 13.71 0.00 0.00 36406.17 5995.33 45438.29 00:24:08.909 =================================================================================================================== 00:24:08.909 Total : 3508.85 13.71 0.00 0.00 36406.17 5995.33 45438.29 00:24:08.909 0 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1357671 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1357671 ']' 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1357671 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1357671 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1357671' 00:24:08.909 killing process with pid 1357671 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1357671 00:24:08.909 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.909 00:24:08.909 Latency(us) 00:24:08.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.909 =================================================================================================================== 00:24:08.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.909 [2024-05-15 15:42:21.901687] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:08.909 15:42:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1357671 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1357518 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1357518 ']' 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1357518 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1357518 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1357518' 00:24:09.167 killing process with pid 1357518 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1357518 00:24:09.167 [2024-05-15 15:42:22.159414] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:09.167 [2024-05-15 15:42:22.159468] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:09.167 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1357518 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1359000 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1359000 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1359000 ']' 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:09.424 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.424 [2024-05-15 15:42:22.453673] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:09.424 [2024-05-15 15:42:22.453764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.424 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.424 [2024-05-15 15:42:22.496509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:09.682 [2024-05-15 15:42:22.533646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.682 [2024-05-15 15:42:22.619413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.682 [2024-05-15 15:42:22.619475] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.682 [2024-05-15 15:42:22.619500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.682 [2024-05-15 15:42:22.619514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.682 [2024-05-15 15:42:22.619526] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.682 [2024-05-15 15:42:22.619557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.cFRcUV7Rs5 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cFRcUV7Rs5 00:24:09.682 15:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.939 [2024-05-15 15:42:23.038060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.196 15:42:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:10.453 15:42:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:10.710 [2024-05-15 15:42:23.559436] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:10.710 [2024-05-15 15:42:23.559533] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.710 [2024-05-15 15:42:23.559771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.710 15:42:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.967 malloc0 00:24:10.967 15:42:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:11.225 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5 00:24:11.484 [2024-05-15 15:42:24.340982] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1359285 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1359285 /var/tmp/bdevperf.sock 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1359285 ']' 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:11.484 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 [2024-05-15 15:42:24.403247] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:11.484 [2024-05-15 15:42:24.403331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359285 ] 00:24:11.484 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.484 [2024-05-15 15:42:24.440342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:11.484 [2024-05-15 15:42:24.476794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.484 [2024-05-15 15:42:24.566771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.741 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.741 15:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:11.741 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cFRcUV7Rs5 00:24:12.007 15:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:12.308 [2024-05-15 15:42:25.149839] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.308 nvme0n1 00:24:12.308 15:42:25 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.308 Running I/O for 1 seconds... 
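This pass exercises the newer flow: instead of embedding everything in start-up JSON, the target is built up with live rpc.py calls and the initiator registers the PSK through the keyring before attaching. Collected from the trace above into one sketch (RPC is the rpc.py path used throughout this job; the commands themselves are taken verbatim from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side (default /var/tmp/spdk.sock): TCP transport, subsystem, TLS listener (-k),
# a malloc0 namespace, and host1 authorized with the plaintext PSK file.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cFRcUV7Rs5
# Initiator side (bdevperf's RPC socket): register the PSK as key0, then attach over TLS.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cFRcUV7Rs5
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1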
00:24:13.678 00:24:13.678 Latency(us) 00:24:13.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.678 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:13.678 Verification LBA range: start 0x0 length 0x2000 00:24:13.678 nvme0n1 : 1.03 3175.82 12.41 0.00 0.00 39642.92 6359.42 58642.58 00:24:13.678 =================================================================================================================== 00:24:13.678 Total : 3175.82 12.41 0.00 0.00 39642.92 6359.42 58642.58 00:24:13.678 0 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1359285 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1359285 ']' 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1359285 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1359285 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1359285' 00:24:13.678 killing process with pid 1359285 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1359285 00:24:13.678 Received shutdown signal, test time was about 1.000000 seconds 00:24:13.678 00:24:13.678 Latency(us) 00:24:13.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.678 =================================================================================================================== 00:24:13.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1359285 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1359000 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1359000 ']' 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1359000 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1359000 00:24:13.678 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:13.679 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:13.679 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1359000' 00:24:13.679 killing process with pid 1359000 00:24:13.679 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1359000 00:24:13.679 [2024-05-15 15:42:26.675965] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:13.679 [2024-05-15 15:42:26.676038] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:13.679 15:42:26 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 1359000 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1359563 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1359563 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1359563 ']' 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:13.936 15:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.936 [2024-05-15 15:42:26.988313] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:13.936 [2024-05-15 15:42:26.988389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.936 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.936 [2024-05-15 15:42:27.032846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:14.194 [2024-05-15 15:42:27.070535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.194 [2024-05-15 15:42:27.156250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.194 [2024-05-15 15:42:27.156316] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.194 [2024-05-15 15:42:27.156342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.194 [2024-05-15 15:42:27.156355] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.194 [2024-05-15 15:42:27.156367] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.194 [2024-05-15 15:42:27.156398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.194 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.194 [2024-05-15 15:42:27.295109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.452 malloc0 00:24:14.452 [2024-05-15 15:42:27.326811] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:14.452 [2024-05-15 15:42:27.326901] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.452 [2024-05-15 15:42:27.327132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1359591 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1359591 /var/tmp/bdevperf.sock 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1359591 ']' 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.452 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 [2024-05-15 15:42:27.396301] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:14.452 [2024-05-15 15:42:27.396362] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359591 ] 00:24:14.452 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.452 [2024-05-15 15:42:27.433681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
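The final pass in this section repeats the keyring-based attach (key0 plus bdev_nvme_attach_controller --psk key0, as the trace below shows) and then snapshots both running applications with save_config; the large tgtcfg and bperfcfg JSON dumps further down are the output of those calls. A stand-alone equivalent is sketched below; in the harness, rpc_cmd appears to wrap rpc.py against the target's default socket and the output is captured into shell variables rather than files, so the filenames here are placeholders.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Snapshot the live target configuration and the bdevperf configuration.
$RPC save_config > tgtcfg.json
$RPC -s /var/tmp/bdevperf.sock save_config > bperfcfg.json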
00:24:14.452 [2024-05-15 15:42:27.469341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.710 [2024-05-15 15:42:27.567150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.710 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:14.710 15:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:14.710 15:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cFRcUV7Rs5 00:24:14.968 15:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:15.225 [2024-05-15 15:42:28.132819] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.225 nvme0n1 00:24:15.225 15:42:28 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.225 Running I/O for 1 seconds... 00:24:16.598 00:24:16.598 Latency(us) 00:24:16.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.598 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:16.598 Verification LBA range: start 0x0 length 0x2000 00:24:16.598 nvme0n1 : 1.03 2690.63 10.51 0.00 0.00 46954.80 7573.05 69516.71 00:24:16.598 =================================================================================================================== 00:24:16.598 Total : 2690.63 10.51 0.00 0.00 46954.80 7573.05 69516.71 00:24:16.598 0 00:24:16.598 15:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:16.598 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.598 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.598 15:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:16.598 "subsystems": [ 00:24:16.598 { 00:24:16.598 "subsystem": "keyring", 00:24:16.598 "config": [ 00:24:16.598 { 00:24:16.598 "method": "keyring_file_add_key", 00:24:16.598 "params": { 00:24:16.598 "name": "key0", 00:24:16.598 "path": "/tmp/tmp.cFRcUV7Rs5" 00:24:16.598 } 00:24:16.598 } 00:24:16.598 ] 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "subsystem": "iobuf", 00:24:16.598 "config": [ 00:24:16.598 { 00:24:16.598 "method": "iobuf_set_options", 00:24:16.598 "params": { 00:24:16.598 "small_pool_count": 8192, 00:24:16.598 "large_pool_count": 1024, 00:24:16.598 "small_bufsize": 8192, 00:24:16.598 "large_bufsize": 135168 00:24:16.598 } 00:24:16.598 } 00:24:16.598 ] 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "subsystem": "sock", 00:24:16.598 "config": [ 00:24:16.598 { 00:24:16.598 "method": "sock_impl_set_options", 00:24:16.598 "params": { 00:24:16.598 "impl_name": "posix", 00:24:16.598 "recv_buf_size": 2097152, 00:24:16.598 "send_buf_size": 2097152, 00:24:16.598 "enable_recv_pipe": true, 00:24:16.598 "enable_quickack": false, 00:24:16.598 "enable_placement_id": 0, 00:24:16.598 "enable_zerocopy_send_server": true, 00:24:16.598 "enable_zerocopy_send_client": false, 00:24:16.598 "zerocopy_threshold": 0, 00:24:16.598 "tls_version": 0, 00:24:16.598 
"enable_ktls": false 00:24:16.598 } 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "method": "sock_impl_set_options", 00:24:16.598 "params": { 00:24:16.598 "impl_name": "ssl", 00:24:16.598 "recv_buf_size": 4096, 00:24:16.598 "send_buf_size": 4096, 00:24:16.598 "enable_recv_pipe": true, 00:24:16.598 "enable_quickack": false, 00:24:16.598 "enable_placement_id": 0, 00:24:16.598 "enable_zerocopy_send_server": true, 00:24:16.598 "enable_zerocopy_send_client": false, 00:24:16.598 "zerocopy_threshold": 0, 00:24:16.598 "tls_version": 0, 00:24:16.598 "enable_ktls": false 00:24:16.598 } 00:24:16.598 } 00:24:16.598 ] 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "subsystem": "vmd", 00:24:16.598 "config": [] 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "subsystem": "accel", 00:24:16.598 "config": [ 00:24:16.598 { 00:24:16.598 "method": "accel_set_options", 00:24:16.598 "params": { 00:24:16.598 "small_cache_size": 128, 00:24:16.598 "large_cache_size": 16, 00:24:16.598 "task_count": 2048, 00:24:16.598 "sequence_count": 2048, 00:24:16.598 "buf_count": 2048 00:24:16.598 } 00:24:16.598 } 00:24:16.598 ] 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "subsystem": "bdev", 00:24:16.598 "config": [ 00:24:16.598 { 00:24:16.598 "method": "bdev_set_options", 00:24:16.598 "params": { 00:24:16.598 "bdev_io_pool_size": 65535, 00:24:16.598 "bdev_io_cache_size": 256, 00:24:16.598 "bdev_auto_examine": true, 00:24:16.598 "iobuf_small_cache_size": 128, 00:24:16.598 "iobuf_large_cache_size": 16 00:24:16.598 } 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "method": "bdev_raid_set_options", 00:24:16.598 "params": { 00:24:16.598 "process_window_size_kb": 1024 00:24:16.598 } 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "method": "bdev_iscsi_set_options", 00:24:16.598 "params": { 00:24:16.598 "timeout_sec": 30 00:24:16.598 } 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "method": "bdev_nvme_set_options", 00:24:16.598 "params": { 00:24:16.598 "action_on_timeout": "none", 00:24:16.598 "timeout_us": 0, 00:24:16.598 "timeout_admin_us": 0, 00:24:16.598 "keep_alive_timeout_ms": 10000, 00:24:16.598 "arbitration_burst": 0, 00:24:16.598 "low_priority_weight": 0, 00:24:16.598 "medium_priority_weight": 0, 00:24:16.598 "high_priority_weight": 0, 00:24:16.598 "nvme_adminq_poll_period_us": 10000, 00:24:16.598 "nvme_ioq_poll_period_us": 0, 00:24:16.598 "io_queue_requests": 0, 00:24:16.598 "delay_cmd_submit": true, 00:24:16.598 "transport_retry_count": 4, 00:24:16.598 "bdev_retry_count": 3, 00:24:16.598 "transport_ack_timeout": 0, 00:24:16.598 "ctrlr_loss_timeout_sec": 0, 00:24:16.598 "reconnect_delay_sec": 0, 00:24:16.598 "fast_io_fail_timeout_sec": 0, 00:24:16.598 "disable_auto_failback": false, 00:24:16.598 "generate_uuids": false, 00:24:16.598 "transport_tos": 0, 00:24:16.598 "nvme_error_stat": false, 00:24:16.598 "rdma_srq_size": 0, 00:24:16.598 "io_path_stat": false, 00:24:16.598 "allow_accel_sequence": false, 00:24:16.598 "rdma_max_cq_size": 0, 00:24:16.598 "rdma_cm_event_timeout_ms": 0, 00:24:16.598 "dhchap_digests": [ 00:24:16.598 "sha256", 00:24:16.598 "sha384", 00:24:16.598 "sha512" 00:24:16.598 ], 00:24:16.598 "dhchap_dhgroups": [ 00:24:16.598 "null", 00:24:16.598 "ffdhe2048", 00:24:16.598 "ffdhe3072", 00:24:16.598 "ffdhe4096", 00:24:16.598 "ffdhe6144", 00:24:16.598 "ffdhe8192" 00:24:16.598 ] 00:24:16.598 } 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "method": "bdev_nvme_set_hotplug", 00:24:16.598 "params": { 00:24:16.598 "period_us": 100000, 00:24:16.598 "enable": false 00:24:16.598 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": 
"bdev_malloc_create", 00:24:16.599 "params": { 00:24:16.599 "name": "malloc0", 00:24:16.599 "num_blocks": 8192, 00:24:16.599 "block_size": 4096, 00:24:16.599 "physical_block_size": 4096, 00:24:16.599 "uuid": "82777ce0-981f-4062-85a5-b27633f453d9", 00:24:16.599 "optimal_io_boundary": 0 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "bdev_wait_for_examine" 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "subsystem": "nbd", 00:24:16.599 "config": [] 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "subsystem": "scheduler", 00:24:16.599 "config": [ 00:24:16.599 { 00:24:16.599 "method": "framework_set_scheduler", 00:24:16.599 "params": { 00:24:16.599 "name": "static" 00:24:16.599 } 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "subsystem": "nvmf", 00:24:16.599 "config": [ 00:24:16.599 { 00:24:16.599 "method": "nvmf_set_config", 00:24:16.599 "params": { 00:24:16.599 "discovery_filter": "match_any", 00:24:16.599 "admin_cmd_passthru": { 00:24:16.599 "identify_ctrlr": false 00:24:16.599 } 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_set_max_subsystems", 00:24:16.599 "params": { 00:24:16.599 "max_subsystems": 1024 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_set_crdt", 00:24:16.599 "params": { 00:24:16.599 "crdt1": 0, 00:24:16.599 "crdt2": 0, 00:24:16.599 "crdt3": 0 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_create_transport", 00:24:16.599 "params": { 00:24:16.599 "trtype": "TCP", 00:24:16.599 "max_queue_depth": 128, 00:24:16.599 "max_io_qpairs_per_ctrlr": 127, 00:24:16.599 "in_capsule_data_size": 4096, 00:24:16.599 "max_io_size": 131072, 00:24:16.599 "io_unit_size": 131072, 00:24:16.599 "max_aq_depth": 128, 00:24:16.599 "num_shared_buffers": 511, 00:24:16.599 "buf_cache_size": 4294967295, 00:24:16.599 "dif_insert_or_strip": false, 00:24:16.599 "zcopy": false, 00:24:16.599 "c2h_success": false, 00:24:16.599 "sock_priority": 0, 00:24:16.599 "abort_timeout_sec": 1, 00:24:16.599 "ack_timeout": 0, 00:24:16.599 "data_wr_pool_size": 0 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_create_subsystem", 00:24:16.599 "params": { 00:24:16.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.599 "allow_any_host": false, 00:24:16.599 "serial_number": "00000000000000000000", 00:24:16.599 "model_number": "SPDK bdev Controller", 00:24:16.599 "max_namespaces": 32, 00:24:16.599 "min_cntlid": 1, 00:24:16.599 "max_cntlid": 65519, 00:24:16.599 "ana_reporting": false 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_subsystem_add_host", 00:24:16.599 "params": { 00:24:16.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.599 "host": "nqn.2016-06.io.spdk:host1", 00:24:16.599 "psk": "key0" 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_subsystem_add_ns", 00:24:16.599 "params": { 00:24:16.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.599 "namespace": { 00:24:16.599 "nsid": 1, 00:24:16.599 "bdev_name": "malloc0", 00:24:16.599 "nguid": "82777CE0981F406285A5B27633F453D9", 00:24:16.599 "uuid": "82777ce0-981f-4062-85a5-b27633f453d9", 00:24:16.599 "no_auto_visible": false 00:24:16.599 } 00:24:16.599 } 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "method": "nvmf_subsystem_add_listener", 00:24:16.599 "params": { 00:24:16.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.599 "listen_address": { 00:24:16.599 "trtype": "TCP", 00:24:16.599 "adrfam": "IPv4", 00:24:16.599 "traddr": "10.0.0.2", 
00:24:16.599 "trsvcid": "4420" 00:24:16.599 }, 00:24:16.599 "secure_channel": true 00:24:16.599 } 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }' 00:24:16.599 15:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:16.858 15:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:16.858 "subsystems": [ 00:24:16.858 { 00:24:16.858 "subsystem": "keyring", 00:24:16.858 "config": [ 00:24:16.858 { 00:24:16.858 "method": "keyring_file_add_key", 00:24:16.858 "params": { 00:24:16.858 "name": "key0", 00:24:16.858 "path": "/tmp/tmp.cFRcUV7Rs5" 00:24:16.858 } 00:24:16.858 } 00:24:16.858 ] 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "subsystem": "iobuf", 00:24:16.858 "config": [ 00:24:16.858 { 00:24:16.858 "method": "iobuf_set_options", 00:24:16.858 "params": { 00:24:16.858 "small_pool_count": 8192, 00:24:16.858 "large_pool_count": 1024, 00:24:16.858 "small_bufsize": 8192, 00:24:16.858 "large_bufsize": 135168 00:24:16.858 } 00:24:16.858 } 00:24:16.858 ] 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "subsystem": "sock", 00:24:16.858 "config": [ 00:24:16.858 { 00:24:16.858 "method": "sock_impl_set_options", 00:24:16.858 "params": { 00:24:16.858 "impl_name": "posix", 00:24:16.858 "recv_buf_size": 2097152, 00:24:16.858 "send_buf_size": 2097152, 00:24:16.858 "enable_recv_pipe": true, 00:24:16.858 "enable_quickack": false, 00:24:16.858 "enable_placement_id": 0, 00:24:16.858 "enable_zerocopy_send_server": true, 00:24:16.858 "enable_zerocopy_send_client": false, 00:24:16.858 "zerocopy_threshold": 0, 00:24:16.858 "tls_version": 0, 00:24:16.858 "enable_ktls": false 00:24:16.858 } 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "method": "sock_impl_set_options", 00:24:16.858 "params": { 00:24:16.858 "impl_name": "ssl", 00:24:16.858 "recv_buf_size": 4096, 00:24:16.858 "send_buf_size": 4096, 00:24:16.858 "enable_recv_pipe": true, 00:24:16.858 "enable_quickack": false, 00:24:16.858 "enable_placement_id": 0, 00:24:16.858 "enable_zerocopy_send_server": true, 00:24:16.858 "enable_zerocopy_send_client": false, 00:24:16.858 "zerocopy_threshold": 0, 00:24:16.858 "tls_version": 0, 00:24:16.858 "enable_ktls": false 00:24:16.858 } 00:24:16.858 } 00:24:16.858 ] 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "subsystem": "vmd", 00:24:16.858 "config": [] 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "subsystem": "accel", 00:24:16.858 "config": [ 00:24:16.858 { 00:24:16.858 "method": "accel_set_options", 00:24:16.858 "params": { 00:24:16.858 "small_cache_size": 128, 00:24:16.858 "large_cache_size": 16, 00:24:16.858 "task_count": 2048, 00:24:16.858 "sequence_count": 2048, 00:24:16.858 "buf_count": 2048 00:24:16.858 } 00:24:16.858 } 00:24:16.858 ] 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "subsystem": "bdev", 00:24:16.858 "config": [ 00:24:16.858 { 00:24:16.858 "method": "bdev_set_options", 00:24:16.858 "params": { 00:24:16.858 "bdev_io_pool_size": 65535, 00:24:16.858 "bdev_io_cache_size": 256, 00:24:16.858 "bdev_auto_examine": true, 00:24:16.858 "iobuf_small_cache_size": 128, 00:24:16.858 "iobuf_large_cache_size": 16 00:24:16.858 } 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "method": "bdev_raid_set_options", 00:24:16.858 "params": { 00:24:16.858 "process_window_size_kb": 1024 00:24:16.858 } 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "method": "bdev_iscsi_set_options", 00:24:16.858 "params": { 00:24:16.858 "timeout_sec": 30 00:24:16.858 } 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "method": 
"bdev_nvme_set_options", 00:24:16.858 "params": { 00:24:16.858 "action_on_timeout": "none", 00:24:16.858 "timeout_us": 0, 00:24:16.858 "timeout_admin_us": 0, 00:24:16.858 "keep_alive_timeout_ms": 10000, 00:24:16.858 "arbitration_burst": 0, 00:24:16.858 "low_priority_weight": 0, 00:24:16.858 "medium_priority_weight": 0, 00:24:16.858 "high_priority_weight": 0, 00:24:16.858 "nvme_adminq_poll_period_us": 10000, 00:24:16.858 "nvme_ioq_poll_period_us": 0, 00:24:16.858 "io_queue_requests": 512, 00:24:16.858 "delay_cmd_submit": true, 00:24:16.858 "transport_retry_count": 4, 00:24:16.858 "bdev_retry_count": 3, 00:24:16.858 "transport_ack_timeout": 0, 00:24:16.858 "ctrlr_loss_timeout_sec": 0, 00:24:16.858 "reconnect_delay_sec": 0, 00:24:16.858 "fast_io_fail_timeout_sec": 0, 00:24:16.858 "disable_auto_failback": false, 00:24:16.858 "generate_uuids": false, 00:24:16.858 "transport_tos": 0, 00:24:16.858 "nvme_error_stat": false, 00:24:16.858 "rdma_srq_size": 0, 00:24:16.858 "io_path_stat": false, 00:24:16.858 "allow_accel_sequence": false, 00:24:16.858 "rdma_max_cq_size": 0, 00:24:16.858 "rdma_cm_event_timeout_ms": 0, 00:24:16.858 "dhchap_digests": [ 00:24:16.858 "sha256", 00:24:16.858 "sha384", 00:24:16.858 "sha512" 00:24:16.858 ], 00:24:16.858 "dhchap_dhgroups": [ 00:24:16.858 "null", 00:24:16.858 "ffdhe2048", 00:24:16.858 "ffdhe3072", 00:24:16.858 "ffdhe4096", 00:24:16.858 "ffdhe6144", 00:24:16.858 "ffdhe8192" 00:24:16.858 ] 00:24:16.858 } 00:24:16.858 }, 00:24:16.858 { 00:24:16.858 "method": "bdev_nvme_attach_controller", 00:24:16.858 "params": { 00:24:16.858 "name": "nvme0", 00:24:16.858 "trtype": "TCP", 00:24:16.858 "adrfam": "IPv4", 00:24:16.858 "traddr": "10.0.0.2", 00:24:16.858 "trsvcid": "4420", 00:24:16.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.858 "prchk_reftag": false, 00:24:16.858 "prchk_guard": false, 00:24:16.859 "ctrlr_loss_timeout_sec": 0, 00:24:16.859 "reconnect_delay_sec": 0, 00:24:16.859 "fast_io_fail_timeout_sec": 0, 00:24:16.859 "psk": "key0", 00:24:16.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.859 "hdgst": false, 00:24:16.859 "ddgst": false 00:24:16.859 } 00:24:16.859 }, 00:24:16.859 { 00:24:16.859 "method": "bdev_nvme_set_hotplug", 00:24:16.859 "params": { 00:24:16.859 "period_us": 100000, 00:24:16.859 "enable": false 00:24:16.859 } 00:24:16.859 }, 00:24:16.859 { 00:24:16.859 "method": "bdev_enable_histogram", 00:24:16.859 "params": { 00:24:16.859 "name": "nvme0n1", 00:24:16.859 "enable": true 00:24:16.859 } 00:24:16.859 }, 00:24:16.859 { 00:24:16.859 "method": "bdev_wait_for_examine" 00:24:16.859 } 00:24:16.859 ] 00:24:16.859 }, 00:24:16.859 { 00:24:16.859 "subsystem": "nbd", 00:24:16.859 "config": [] 00:24:16.859 } 00:24:16.859 ] 00:24:16.859 }' 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1359591 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1359591 ']' 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1359591 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1359591 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1359591' 00:24:16.859 killing process with pid 1359591 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1359591 00:24:16.859 Received shutdown signal, test time was about 1.000000 seconds 00:24:16.859 00:24:16.859 Latency(us) 00:24:16.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.859 =================================================================================================================== 00:24:16.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.859 15:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1359591 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1359563 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1359563 ']' 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1359563 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1359563 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1359563' 00:24:17.116 killing process with pid 1359563 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1359563 00:24:17.116 [2024-05-15 15:42:30.104341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:17.116 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1359563 00:24:17.374 15:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:17.374 15:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.374 15:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:17.374 "subsystems": [ 00:24:17.374 { 00:24:17.374 "subsystem": "keyring", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "keyring_file_add_key", 00:24:17.374 "params": { 00:24:17.374 "name": "key0", 00:24:17.374 "path": "/tmp/tmp.cFRcUV7Rs5" 00:24:17.374 } 00:24:17.374 } 00:24:17.374 ] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "iobuf", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "iobuf_set_options", 00:24:17.374 "params": { 00:24:17.374 "small_pool_count": 8192, 00:24:17.374 "large_pool_count": 1024, 00:24:17.374 "small_bufsize": 8192, 00:24:17.374 "large_bufsize": 135168 00:24:17.374 } 00:24:17.374 } 00:24:17.374 ] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "sock", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "sock_impl_set_options", 00:24:17.374 "params": { 00:24:17.374 "impl_name": "posix", 00:24:17.374 "recv_buf_size": 2097152, 00:24:17.374 "send_buf_size": 2097152, 00:24:17.374 "enable_recv_pipe": true, 00:24:17.374 "enable_quickack": false, 00:24:17.374 "enable_placement_id": 0, 00:24:17.374 "enable_zerocopy_send_server": true, 00:24:17.374 "enable_zerocopy_send_client": false, 00:24:17.374 "zerocopy_threshold": 0, 00:24:17.374 
"tls_version": 0, 00:24:17.374 "enable_ktls": false 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "sock_impl_set_options", 00:24:17.374 "params": { 00:24:17.374 "impl_name": "ssl", 00:24:17.374 "recv_buf_size": 4096, 00:24:17.374 "send_buf_size": 4096, 00:24:17.374 "enable_recv_pipe": true, 00:24:17.374 "enable_quickack": false, 00:24:17.374 "enable_placement_id": 0, 00:24:17.374 "enable_zerocopy_send_server": true, 00:24:17.374 "enable_zerocopy_send_client": false, 00:24:17.374 "zerocopy_threshold": 0, 00:24:17.374 "tls_version": 0, 00:24:17.374 "enable_ktls": false 00:24:17.374 } 00:24:17.374 } 00:24:17.374 ] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "vmd", 00:24:17.374 "config": [] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "accel", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "accel_set_options", 00:24:17.374 "params": { 00:24:17.374 "small_cache_size": 128, 00:24:17.374 "large_cache_size": 16, 00:24:17.374 "task_count": 2048, 00:24:17.374 "sequence_count": 2048, 00:24:17.374 "buf_count": 2048 00:24:17.374 } 00:24:17.374 } 00:24:17.374 ] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "bdev", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "bdev_set_options", 00:24:17.374 "params": { 00:24:17.374 "bdev_io_pool_size": 65535, 00:24:17.374 "bdev_io_cache_size": 256, 00:24:17.374 "bdev_auto_examine": true, 00:24:17.374 "iobuf_small_cache_size": 128, 00:24:17.374 "iobuf_large_cache_size": 16 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "bdev_raid_set_options", 00:24:17.374 "params": { 00:24:17.374 "process_window_size_kb": 1024 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "bdev_iscsi_set_options", 00:24:17.374 "params": { 00:24:17.374 "timeout_sec": 30 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "bdev_nvme_set_options", 00:24:17.374 "params": { 00:24:17.374 "action_on_timeout": "none", 00:24:17.374 "timeout_us": 0, 00:24:17.374 "timeout_admin_us": 0, 00:24:17.374 "keep_alive_timeout_ms": 10000, 00:24:17.374 "arbitration_burst": 0, 00:24:17.374 "low_priority_weight": 0, 00:24:17.374 "medium_priority_weight": 0, 00:24:17.374 "high_priority_weight": 0, 00:24:17.374 "nvme_adminq_poll_period_us": 10000, 00:24:17.374 "nvme_ioq_poll_period_us": 0, 00:24:17.374 "io_queue_requests": 0, 00:24:17.374 "delay_cmd_submit": true, 00:24:17.374 "transport_retry_count": 4, 00:24:17.374 "bdev_retry_count": 3, 00:24:17.374 "transport_ack_timeout": 0, 00:24:17.374 "ctrlr_loss_timeout_sec": 0, 00:24:17.374 "reconnect_delay_sec": 0, 00:24:17.374 "fast_io_fail_timeout_sec": 0, 00:24:17.374 "disable_auto_failback": false, 00:24:17.374 "generate_uuids": false, 00:24:17.374 "transport_tos": 0, 00:24:17.374 "nvme_error_stat": false, 00:24:17.374 "rdma_srq_size": 0, 00:24:17.374 "io_path_stat": false, 00:24:17.374 "allow_accel_sequence": false, 00:24:17.374 "rdma_max_cq_size": 0, 00:24:17.374 "rdma_cm_event_timeout_ms": 0, 00:24:17.374 "dhchap_digests": [ 00:24:17.374 "sha256", 00:24:17.374 "sha384", 00:24:17.374 "sha512" 00:24:17.374 ], 00:24:17.374 "dhchap_dhgroups": [ 00:24:17.374 "null", 00:24:17.374 "ffdhe2048", 00:24:17.374 "ffdhe3072", 00:24:17.374 "ffdhe4096", 00:24:17.374 "ffdhe6144", 00:24:17.374 "ffdhe8192" 00:24:17.374 ] 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "bdev_nvme_set_hotplug", 00:24:17.374 "params": { 00:24:17.374 "period_us": 100000, 00:24:17.374 "enable": false 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 
{ 00:24:17.374 "method": "bdev_malloc_create", 00:24:17.374 "params": { 00:24:17.374 "name": "malloc0", 00:24:17.374 "num_blocks": 8192, 00:24:17.374 "block_size": 4096, 00:24:17.374 "physical_block_size": 4096, 00:24:17.374 "uuid": "82777ce0-981f-4062-85a5-b27633f453d9", 00:24:17.374 "optimal_io_boundary": 0 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "bdev_wait_for_examine" 00:24:17.374 } 00:24:17.374 ] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "nbd", 00:24:17.374 "config": [] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "scheduler", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "framework_set_scheduler", 00:24:17.374 "params": { 00:24:17.374 "name": "static" 00:24:17.374 } 00:24:17.374 } 00:24:17.374 ] 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "subsystem": "nvmf", 00:24:17.374 "config": [ 00:24:17.374 { 00:24:17.374 "method": "nvmf_set_config", 00:24:17.374 "params": { 00:24:17.374 "discovery_filter": "match_any", 00:24:17.374 "admin_cmd_passthru": { 00:24:17.374 "identify_ctrlr": false 00:24:17.374 } 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "nvmf_set_max_subsystems", 00:24:17.374 "params": { 00:24:17.374 "max_subsystems": 1024 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "nvmf_set_crdt", 00:24:17.374 "params": { 00:24:17.374 "crdt1": 0, 00:24:17.374 "crdt2": 0, 00:24:17.374 "crdt3": 0 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "nvmf_create_transport", 00:24:17.374 "params": { 00:24:17.374 "trtype": "TCP", 00:24:17.374 "max_queue_depth": 128, 00:24:17.374 "max_io_qpairs_per_ctrlr": 127, 00:24:17.374 "in_capsule_data_size": 4096, 00:24:17.374 "max_io_size": 131072, 00:24:17.374 "io_unit_size": 131072, 00:24:17.374 "max_aq_depth": 128, 00:24:17.374 "num_shared_buffers": 511, 00:24:17.374 "buf_cache_size": 4294967295, 00:24:17.374 "dif_insert_or_strip": false, 00:24:17.374 "zcopy": false, 00:24:17.374 "c2h_success": false, 00:24:17.374 "sock_priority": 0, 00:24:17.374 "abort_timeout_sec": 1, 00:24:17.374 "ack_timeout": 0, 00:24:17.374 "data_wr_pool_size": 0 00:24:17.374 } 00:24:17.374 }, 00:24:17.374 { 00:24:17.374 "method": "nvmf_create_subsystem", 00:24:17.374 "params": { 00:24:17.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.374 "allow_any_host": false, 00:24:17.374 "serial_number": "00000000000000000000", 00:24:17.374 "model_number": "SPDK bdev Controller", 00:24:17.374 "max_namespaces": 32, 00:24:17.374 "min_cntlid": 1, 00:24:17.374 "max_cntlid": 65519, 00:24:17.374 "ana_reporting": false 00:24:17.375 } 00:24:17.375 }, 00:24:17.375 { 00:24:17.375 "method": "nvmf_subsystem_add_host", 00:24:17.375 "params": { 00:24:17.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.375 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.375 "psk": "key0" 00:24:17.375 } 00:24:17.375 }, 00:24:17.375 { 00:24:17.375 "method": "nvmf_subsystem_add_ns", 00:24:17.375 "params": { 00:24:17.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.375 "namespace": { 00:24:17.375 "nsid": 1, 00:24:17.375 "bdev_name": "malloc0", 00:24:17.375 "nguid": "82777CE0981F406285A5B27633F453D9", 00:24:17.375 "uuid": "82777ce0-981f-4062-85a5-b27633f453d9", 00:24:17.375 "no_auto_visible": false 00:24:17.375 } 00:24:17.375 } 00:24:17.375 }, 00:24:17.375 { 00:24:17.375 "method": "nvmf_subsystem_add_listener", 00:24:17.375 "params": { 00:24:17.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.375 "listen_address": { 00:24:17.375 "trtype": "TCP", 00:24:17.375 "adrfam": "IPv4", 00:24:17.375 
"traddr": "10.0.0.2", 00:24:17.375 "trsvcid": "4420" 00:24:17.375 }, 00:24:17.375 "secure_channel": true 00:24:17.375 } 00:24:17.375 } 00:24:17.375 ] 00:24:17.375 } 00:24:17.375 ] 00:24:17.375 }' 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1359997 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1359997 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1359997 ']' 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:17.375 15:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.375 [2024-05-15 15:42:30.411681] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:17.375 [2024-05-15 15:42:30.411777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.375 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.375 [2024-05-15 15:42:30.456418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:17.632 [2024-05-15 15:42:30.494320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.632 [2024-05-15 15:42:30.579972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.632 [2024-05-15 15:42:30.580035] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.632 [2024-05-15 15:42:30.580061] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.632 [2024-05-15 15:42:30.580075] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.632 [2024-05-15 15:42:30.580087] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.632 [2024-05-15 15:42:30.580173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.890 [2024-05-15 15:42:30.808408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.890 [2024-05-15 15:42:30.840384] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:17.890 [2024-05-15 15:42:30.840462] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.890 [2024-05-15 15:42:30.848418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1360150 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1360150 /var/tmp/bdevperf.sock 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1360150 ']' 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
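The bdevperf launch in the trace above uses a config-over-file-descriptor pattern: the JSON captured earlier with save_config is echoed into a process substitution, so bdevperf reads its configuration from /dev/fd/63 and no temporary file is written, while -z keeps it idle until it is driven over its private RPC socket. A minimal sketch of that pattern, with the workspace path shortened and the option values copied from the trace:

  # Sketch only -- paths shortened; $bperfcfg holds the JSON captured above via
  # "rpc.py -s /var/tmp/bdevperf.sock save_config".
  bperfcfg='{ "subsystems": [ ... ] }'
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")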
00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.455 15:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:18.455 "subsystems": [ 00:24:18.455 { 00:24:18.455 "subsystem": "keyring", 00:24:18.455 "config": [ 00:24:18.455 { 00:24:18.455 "method": "keyring_file_add_key", 00:24:18.455 "params": { 00:24:18.455 "name": "key0", 00:24:18.455 "path": "/tmp/tmp.cFRcUV7Rs5" 00:24:18.455 } 00:24:18.455 } 00:24:18.455 ] 00:24:18.455 }, 00:24:18.455 { 00:24:18.455 "subsystem": "iobuf", 00:24:18.455 "config": [ 00:24:18.455 { 00:24:18.455 "method": "iobuf_set_options", 00:24:18.455 "params": { 00:24:18.455 "small_pool_count": 8192, 00:24:18.455 "large_pool_count": 1024, 00:24:18.455 "small_bufsize": 8192, 00:24:18.455 "large_bufsize": 135168 00:24:18.455 } 00:24:18.455 } 00:24:18.455 ] 00:24:18.455 }, 00:24:18.455 { 00:24:18.455 "subsystem": "sock", 00:24:18.455 "config": [ 00:24:18.455 { 00:24:18.455 "method": "sock_impl_set_options", 00:24:18.455 "params": { 00:24:18.455 "impl_name": "posix", 00:24:18.455 "recv_buf_size": 2097152, 00:24:18.455 "send_buf_size": 2097152, 00:24:18.455 "enable_recv_pipe": true, 00:24:18.455 "enable_quickack": false, 00:24:18.456 "enable_placement_id": 0, 00:24:18.456 "enable_zerocopy_send_server": true, 00:24:18.456 "enable_zerocopy_send_client": false, 00:24:18.456 "zerocopy_threshold": 0, 00:24:18.456 "tls_version": 0, 00:24:18.456 "enable_ktls": false 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "sock_impl_set_options", 00:24:18.456 "params": { 00:24:18.456 "impl_name": "ssl", 00:24:18.456 "recv_buf_size": 4096, 00:24:18.456 "send_buf_size": 4096, 00:24:18.456 "enable_recv_pipe": true, 00:24:18.456 "enable_quickack": false, 00:24:18.456 "enable_placement_id": 0, 00:24:18.456 "enable_zerocopy_send_server": true, 00:24:18.456 "enable_zerocopy_send_client": false, 00:24:18.456 "zerocopy_threshold": 0, 00:24:18.456 "tls_version": 0, 00:24:18.456 "enable_ktls": false 00:24:18.456 } 00:24:18.456 } 00:24:18.456 ] 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "subsystem": "vmd", 00:24:18.456 "config": [] 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "subsystem": "accel", 00:24:18.456 "config": [ 00:24:18.456 { 00:24:18.456 "method": "accel_set_options", 00:24:18.456 "params": { 00:24:18.456 "small_cache_size": 128, 00:24:18.456 "large_cache_size": 16, 00:24:18.456 "task_count": 2048, 00:24:18.456 "sequence_count": 2048, 00:24:18.456 "buf_count": 2048 00:24:18.456 } 00:24:18.456 } 00:24:18.456 ] 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "subsystem": "bdev", 00:24:18.456 "config": [ 00:24:18.456 { 00:24:18.456 "method": "bdev_set_options", 00:24:18.456 "params": { 00:24:18.456 "bdev_io_pool_size": 65535, 00:24:18.456 "bdev_io_cache_size": 256, 00:24:18.456 "bdev_auto_examine": true, 00:24:18.456 "iobuf_small_cache_size": 128, 00:24:18.456 "iobuf_large_cache_size": 16 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_raid_set_options", 00:24:18.456 "params": { 00:24:18.456 "process_window_size_kb": 1024 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_iscsi_set_options", 00:24:18.456 "params": { 00:24:18.456 "timeout_sec": 30 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_nvme_set_options", 00:24:18.456 "params": { 00:24:18.456 "action_on_timeout": "none", 00:24:18.456 "timeout_us": 0, 00:24:18.456 "timeout_admin_us": 0, 00:24:18.456 
"keep_alive_timeout_ms": 10000, 00:24:18.456 "arbitration_burst": 0, 00:24:18.456 "low_priority_weight": 0, 00:24:18.456 "medium_priority_weight": 0, 00:24:18.456 "high_priority_weight": 0, 00:24:18.456 "nvme_adminq_poll_period_us": 10000, 00:24:18.456 "nvme_ioq_poll_period_us": 0, 00:24:18.456 "io_queue_requests": 512, 00:24:18.456 "delay_cmd_submit": true, 00:24:18.456 "transport_retry_count": 4, 00:24:18.456 "bdev_retry_count": 3, 00:24:18.456 "transport_ack_timeout": 0, 00:24:18.456 "ctrlr_loss_timeout_sec": 0, 00:24:18.456 "reconnect_delay_sec": 0, 00:24:18.456 "fast_io_fail_timeout_sec": 0, 00:24:18.456 "disable_auto_failback": false, 00:24:18.456 "generate_uuids": false, 00:24:18.456 "transport_tos": 0, 00:24:18.456 "nvme_error_stat": false, 00:24:18.456 "rdma_srq_size": 0, 00:24:18.456 "io_path_stat": false, 00:24:18.456 "allow_accel_sequence": false, 00:24:18.456 "rdma_max_cq_size": 0, 00:24:18.456 "rdma_cm_event_timeout_ms": 0, 00:24:18.456 "dhchap_digests": [ 00:24:18.456 "sha256", 00:24:18.456 "sha384", 00:24:18.456 "sha512" 00:24:18.456 ], 00:24:18.456 "dhchap_dhgroups": [ 00:24:18.456 "null", 00:24:18.456 "ffdhe2048", 00:24:18.456 "ffdhe3072", 00:24:18.456 "ffdhe4096", 00:24:18.456 "ffdhe6144", 00:24:18.456 "ffdhe8192" 00:24:18.456 ] 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_nvme_attach_controller", 00:24:18.456 "params": { 00:24:18.456 "name": "nvme0", 00:24:18.456 "trtype": "TCP", 00:24:18.456 "adrfam": "IPv4", 00:24:18.456 "traddr": "10.0.0.2", 00:24:18.456 "trsvcid": "4420", 00:24:18.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.456 "prchk_reftag": false, 00:24:18.456 "prchk_guard": false, 00:24:18.456 "ctrlr_loss_timeout_sec": 0, 00:24:18.456 "reconnect_delay_sec": 0, 00:24:18.456 "fast_io_fail_timeout_sec": 0, 00:24:18.456 "psk": "key0", 00:24:18.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.456 "hdgst": false, 00:24:18.456 "ddgst": false 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_nvme_set_hotplug", 00:24:18.456 "params": { 00:24:18.456 "period_us": 100000, 00:24:18.456 "enable": false 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_enable_histogram", 00:24:18.456 "params": { 00:24:18.456 "name": "nvme0n1", 00:24:18.456 "enable": true 00:24:18.456 } 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "method": "bdev_wait_for_examine" 00:24:18.456 } 00:24:18.456 ] 00:24:18.456 }, 00:24:18.456 { 00:24:18.456 "subsystem": "nbd", 00:24:18.456 "config": [] 00:24:18.456 } 00:24:18.456 ] 00:24:18.456 }' 00:24:18.456 [2024-05-15 15:42:31.406045] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:18.456 [2024-05-15 15:42:31.406133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360150 ] 00:24:18.456 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.456 [2024-05-15 15:42:31.442032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:18.456 [2024-05-15 15:42:31.474425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.714 [2024-05-15 15:42:31.564185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.715 [2024-05-15 15:42:31.739424] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.646 15:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:19.646 15:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:19.646 15:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.646 15:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:19.646 15:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.646 15:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.646 Running I/O for 1 seconds... 00:24:21.018 00:24:21.018 Latency(us) 00:24:21.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.018 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:21.018 Verification LBA range: start 0x0 length 0x2000 00:24:21.018 nvme0n1 : 1.03 3359.23 13.12 0.00 0.00 37635.17 10631.40 50486.99 00:24:21.018 =================================================================================================================== 00:24:21.018 Total : 3359.23 13.12 0.00 0.00 37635.17 10631.40 50486.99 00:24:21.018 0 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:21.018 nvmf_trace.0 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1360150 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1360150 ']' 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1360150 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1360150 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1360150' 00:24:21.018 killing process with pid 1360150 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1360150 00:24:21.018 Received shutdown signal, test time was about 1.000000 seconds 00:24:21.018 00:24:21.018 Latency(us) 00:24:21.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.018 =================================================================================================================== 00:24:21.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.018 15:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1360150 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.018 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.018 rmmod nvme_tcp 00:24:21.276 rmmod nvme_fabrics 00:24:21.276 rmmod nvme_keyring 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1359997 ']' 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1359997 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1359997 ']' 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1359997 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1359997 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1359997' 00:24:21.276 killing process with pid 1359997 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1359997 00:24:21.276 [2024-05-15 15:42:34.190379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:21.276 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1359997 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.533 15:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.432 15:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.432 15:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.SbD501IbQF /tmp/tmp.I2l6QHZLnU /tmp/tmp.cFRcUV7Rs5 00:24:23.432 00:24:23.432 real 1m19.493s 00:24:23.432 user 2m9.202s 00:24:23.432 sys 0m25.162s 00:24:23.432 15:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:23.432 15:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 ************************************ 00:24:23.432 END TEST nvmf_tls 00:24:23.432 ************************************ 00:24:23.432 15:42:36 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:23.432 15:42:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:23.432 15:42:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:23.432 15:42:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 ************************************ 00:24:23.432 START TEST nvmf_fips 00:24:23.432 ************************************ 00:24:23.432 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:23.690 * Looking for test storage... 
00:24:23.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.690 15:42:36 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.691 15:42:36 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:23.691 Error setting digest 00:24:23.691 0082C42C6A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:23.691 0082C42C6A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:23.691 15:42:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.218 
15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:26.218 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:26.218 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:26.218 Found net devices under 0000:09:00.0: cvl_0_0 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:26.218 Found net devices under 0000:09:00.1: cvl_0_1 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.218 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.219 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:26.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:24:26.477 00:24:26.477 --- 10.0.0.2 ping statistics --- 00:24:26.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.477 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:26.477 00:24:26.477 --- 10.0.0.1 ping statistics --- 00:24:26.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.477 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1362800 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1362800 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1362800 ']' 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.477 15:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.477 [2024-05-15 15:42:39.503852] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:26.477 [2024-05-15 15:42:39.503949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.477 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.477 [2024-05-15 15:42:39.547441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:26.734 [2024-05-15 15:42:39.584767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.734 [2024-05-15 15:42:39.671143] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:26.734 [2024-05-15 15:42:39.671228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.734 [2024-05-15 15:42:39.671246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.734 [2024-05-15 15:42:39.671260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.734 [2024-05-15 15:42:39.671282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.734 [2024-05-15 15:42:39.671319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.664 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:27.664 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.665 [2024-05-15 15:42:40.672625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.665 [2024-05-15 15:42:40.688588] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:27.665 [2024-05-15 15:42:40.688654] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:27.665 [2024-05-15 15:42:40.688921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.665 [2024-05-15 15:42:40.721192] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:27.665 malloc0 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1362956 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:27.665 15:42:40 
nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1362956 /var/tmp/bdevperf.sock 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1362956 ']' 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:27.665 15:42:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:27.921 [2024-05-15 15:42:40.811710] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:27.922 [2024-05-15 15:42:40.811805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362956 ] 00:24:27.922 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.922 [2024-05-15 15:42:40.846984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:27.922 [2024-05-15 15:42:40.878145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.922 [2024-05-15 15:42:40.960731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.877 15:42:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:28.877 15:42:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:28.877 15:42:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:28.877 [2024-05-15 15:42:41.927723] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.877 [2024-05-15 15:42:41.927862] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:29.134 TLSTESTn1 00:24:29.134 15:42:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.134 Running I/O for 10 seconds... 
00:24:39.098 00:24:39.098 Latency(us) 00:24:39.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:39.098 Verification LBA range: start 0x0 length 0x2000 00:24:39.098 TLSTESTn1 : 10.03 2776.74 10.85 0.00 0.00 46004.05 12379.02 69905.07 00:24:39.098 =================================================================================================================== 00:24:39.098 Total : 2776.74 10.85 0.00 0.00 46004.05 12379.02 69905.07 00:24:39.098 0 00:24:39.098 15:42:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:39.098 15:42:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:39.098 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:39.098 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:39.098 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:39.355 nvmf_trace.0 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1362956 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1362956 ']' 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1362956 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1362956 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1362956' 00:24:39.355 killing process with pid 1362956 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1362956 00:24:39.355 Received shutdown signal, test time was about 10.000000 seconds 00:24:39.355 00:24:39.355 Latency(us) 00:24:39.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.355 =================================================================================================================== 00:24:39.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.355 [2024-05-15 15:42:52.303811] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:39.355 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1362956 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.612 rmmod nvme_tcp 00:24:39.612 rmmod nvme_fabrics 00:24:39.612 rmmod nvme_keyring 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1362800 ']' 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1362800 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1362800 ']' 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1362800 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1362800 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1362800' 00:24:39.612 killing process with pid 1362800 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1362800 00:24:39.612 [2024-05-15 15:42:52.633384] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:39.612 [2024-05-15 15:42:52.633438] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:39.612 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1362800 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.870 15:42:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.398 15:42:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:42.398 15:42:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:42.398 00:24:42.398 real 0m18.395s 00:24:42.398 user 0m23.874s 00:24:42.398 sys 0m5.903s 00:24:42.398 15:42:54 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:42.398 15:42:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.398 ************************************ 00:24:42.398 END TEST nvmf_fips 00:24:42.398 ************************************ 00:24:42.398 15:42:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:42.398 15:42:54 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:42.398 15:42:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:42.398 15:42:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:42.398 15:42:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:42.398 ************************************ 00:24:42.398 START TEST nvmf_fuzz 00:24:42.398 ************************************ 00:24:42.398 15:42:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:42.398 * Looking for test storage... 00:24:42.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.398 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.399 15:42:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.298 15:42:57 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:44.298 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:44.298 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:44.298 Found net devices under 0000:09:00.0: cvl_0_0 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.298 15:42:57 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:44.298 Found net devices under 0000:09:00.1: cvl_0_1 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.298 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:24:44.558 00:24:44.558 --- 10.0.0.2 ping statistics --- 00:24:44.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.558 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:24:44.558 00:24:44.558 --- 10.0.0.1 ping statistics --- 00:24:44.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.558 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1366619 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1366619 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1366619 ']' 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
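The network plumbing traced in the last few entries is the same single-host loopback topology the earlier FIPS test built: one port of the dual-port ice/E810 NIC (cvl_0_0) is moved into a private network namespace, both ports get addresses on 10.0.0.0/24, port 4420 is opened for NVMe/TCP, and the target application is then launched inside the namespace so initiator and target traffic crosses real hardware on one box. A condensed sketch of those steps, using the interface and namespace names from this log (full paths shortened; the nvmf_tgt flags are the ones shown in the trace, everything else is an illustrative summary of nvmf/common.sh rather than a verbatim copy):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2                                    # sanity-check both directions before starting the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &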
00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:44.558 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.816 Malloc0 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:44.816 15:42:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:16.868 Fuzzing completed. 
Shutting down the fuzz application 00:25:16.868 00:25:16.868 Dumping successful admin opcodes: 00:25:16.868 8, 9, 10, 24, 00:25:16.868 Dumping successful io opcodes: 00:25:16.868 0, 9, 00:25:16.868 NS: 0x200003aeff00 I/O qp, Total commands completed: 437973, total successful commands: 2559, random_seed: 3688502656 00:25:16.868 NS: 0x200003aeff00 admin qp, Total commands completed: 54976, total successful commands: 440, random_seed: 3730883968 00:25:16.868 15:43:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:16.868 Fuzzing completed. Shutting down the fuzz application 00:25:16.868 00:25:16.868 Dumping successful admin opcodes: 00:25:16.868 24, 00:25:16.868 Dumping successful io opcodes: 00:25:16.868 00:25:16.868 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1581986873 00:25:16.868 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1582094918 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.868 rmmod nvme_tcp 00:25:16.868 rmmod nvme_fabrics 00:25:16.868 rmmod nvme_keyring 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1366619 ']' 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1366619 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1366619 ']' 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 1366619 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1366619 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:16.868 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
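Both fuzz passes above drive the same nvme_fuzz binary at the TCP subsystem created a few steps earlier (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420): the first pass generates commands for a fixed time from a fixed seed, the second replays the command corpus shipped as example.json. A condensed form of the two invocations from this log (paths shortened for readability; the transport-ID string is exactly the one the test constructs, and the comments are an interpretation of the run, not text from the script):

    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # pass 1: 30-second randomized run with a fixed seed, flags as captured in the trace
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
    # pass 2: replay the bundled JSON command corpus against the same target
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a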
00:25:16.869 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1366619' 00:25:16.869 killing process with pid 1366619 00:25:16.869 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 1366619 00:25:16.869 15:43:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 1366619 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.126 15:43:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.064 15:43:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:19.064 15:43:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:19.064 00:25:19.064 real 0m37.119s 00:25:19.064 user 0m50.353s 00:25:19.064 sys 0m15.670s 00:25:19.064 15:43:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:19.064 15:43:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:19.064 ************************************ 00:25:19.064 END TEST nvmf_fuzz 00:25:19.064 ************************************ 00:25:19.064 15:43:32 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:19.064 15:43:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:19.064 15:43:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:19.064 15:43:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.064 ************************************ 00:25:19.064 START TEST nvmf_multiconnection 00:25:19.064 ************************************ 00:25:19.064 15:43:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:19.322 * Looking for test storage... 
00:25:19.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:19.322 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:19.323 15:43:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.849 15:43:34 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:21.849 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:21.849 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:21.849 Found net devices under 0000:09:00.0: cvl_0_0 00:25:21.849 15:43:34 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.849 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:21.850 Found net devices under 0000:09:00.1: cvl_0_1 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
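Condensed, the interface preparation traced above amounts to the following sequence (a summarizing sketch only, not an extra step the test runs; the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.x addresses are taken directly from the trace):

  # one port of the NIC becomes the target side, isolated in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # the other port stays in the default namespace and acts as the initiator side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

Bringing up loopback inside the namespace, inserting an iptables ACCEPT rule for TCP port 4420 and pinging once in each direction follow immediately below to verify the path before the target application is started.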
00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:25:21.850 00:25:21.850 --- 10.0.0.2 ping statistics --- 00:25:21.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.850 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:25:21.850 00:25:21.850 --- 10.0.0.1 ping statistics --- 00:25:21.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.850 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1372533 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1372533 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1372533 ']' 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
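At this point nvmfappstart launches the target inside the namespace. The pattern visible in the trace is roughly the following (a sketch under the assumption that the harness backgrounds the process and polls its RPC socket; the binary path and the -i 0 -e 0xFFFF -m 0xF arguments are as logged, the pid differs per run):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten (common/autotest_common.sh) then blocks until the application
  # accepts connections on /var/tmp/spdk.sock, after which rpc_cmd calls can be issued

The EAL/DPDK initialization notices and the four "Reactor started on core N" lines below are the target coming up on the 0xF core mask.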
00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:21.850 15:43:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.850 [2024-05-15 15:43:34.891971] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:21.850 [2024-05-15 15:43:34.892062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.850 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.850 [2024-05-15 15:43:34.935990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:22.108 [2024-05-15 15:43:34.974464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.108 [2024-05-15 15:43:35.068627] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.108 [2024-05-15 15:43:35.068686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.108 [2024-05-15 15:43:35.068712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.108 [2024-05-15 15:43:35.068725] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.108 [2024-05-15 15:43:35.068737] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.108 [2024-05-15 15:43:35.068952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.108 [2024-05-15 15:43:35.069006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.108 [2024-05-15 15:43:35.069142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.108 [2024-05-15 15:43:35.069144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.108 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.108 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:25:22.108 15:43:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.108 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:22.108 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 [2024-05-15 15:43:35.221948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 Malloc1 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 [2024-05-15 15:43:35.279213] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:22.367 [2024-05-15 15:43:35.279560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 Malloc2 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 Malloc3 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 Malloc4 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 Malloc5 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.367 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:22.368 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.368 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 Malloc6 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
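The subsystem setup repeats the same four RPCs for every i in 1..$NVMF_SUBSYS (11 in this run). Written out for one iteration, the loop traced above is approximately (a sketch using the rpc_cmd helper exactly as the log does; MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 are the values set earlier in the trace):

  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # 64 MB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode$i
      rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b "Malloc$i"
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

The remaining iterations (Malloc6/cnode6 through Malloc11/cnode11) follow below; afterwards the initiator connects to each subsystem with nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnodeN and waitforserial loops until lsblk reports a block device whose serial matches SPDKN.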
00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 Malloc7 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.626 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.627 15:43:35 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 Malloc8 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 Malloc9 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:22.627 15:43:35 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 Malloc10 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.627 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.884 Malloc11 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.884 15:43:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:23.448 15:43:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:23.448 15:43:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:23.448 15:43:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.448 15:43:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:23.448 15:43:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.344 15:43:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:26.276 15:43:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:26.276 15:43:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:26.276 15:43:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.276 15:43:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:26.276 15:43:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:28.173 15:43:41 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.173 15:43:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:28.736 15:43:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:28.736 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:28.736 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.736 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:28.736 15:43:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.259 15:43:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:31.516 15:43:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:31.516 15:43:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:31.516 15:43:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.516 15:43:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:31.516 15:43:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:33.409 
15:43:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.409 15:43:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:34.342 15:43:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:34.342 15:43:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:34.342 15:43:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.342 15:43:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:34.342 15:43:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.235 15:43:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:37.166 15:43:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:37.166 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:37.166 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.166 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:37.166 15:43:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:25:39.116 15:43:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:39.679 15:43:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:39.679 15:43:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:39.679 15:43:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.679 15:43:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:39.679 15:43:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.573 15:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:42.505 15:43:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:42.505 15:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:42.505 15:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.505 15:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:42.505 15:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.400 15:43:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:45.330 
15:43:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:45.330 15:43:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:45.330 15:43:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.330 15:43:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:45.330 15:43:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.226 15:44:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:48.157 15:44:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:48.157 15:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:48.157 15:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.157 15:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:48.157 15:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:50.064 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.320 15:44:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:50.882 15:44:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:50.882 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:50.883 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:50.883 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:50.883 15:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:53.405 15:44:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:53.405 [global] 00:25:53.405 thread=1 00:25:53.405 invalidate=1 00:25:53.405 rw=read 00:25:53.405 time_based=1 00:25:53.405 runtime=10 00:25:53.405 ioengine=libaio 00:25:53.405 direct=1 00:25:53.405 bs=262144 00:25:53.405 iodepth=64 00:25:53.405 norandommap=1 00:25:53.405 numjobs=1 00:25:53.405 00:25:53.405 [job0] 00:25:53.405 filename=/dev/nvme0n1 00:25:53.405 [job1] 00:25:53.405 filename=/dev/nvme10n1 00:25:53.405 [job2] 00:25:53.405 filename=/dev/nvme1n1 00:25:53.405 [job3] 00:25:53.405 filename=/dev/nvme2n1 00:25:53.405 [job4] 00:25:53.405 filename=/dev/nvme3n1 00:25:53.405 [job5] 00:25:53.405 filename=/dev/nvme4n1 00:25:53.405 [job6] 00:25:53.405 filename=/dev/nvme5n1 00:25:53.405 [job7] 00:25:53.405 filename=/dev/nvme6n1 00:25:53.405 [job8] 00:25:53.405 filename=/dev/nvme7n1 00:25:53.405 [job9] 00:25:53.405 filename=/dev/nvme8n1 00:25:53.405 [job10] 00:25:53.405 filename=/dev/nvme9n1 00:25:53.405 Could not set queue depth (nvme0n1) 00:25:53.405 Could not set queue depth (nvme10n1) 00:25:53.405 Could not set queue depth (nvme1n1) 00:25:53.405 Could not set queue depth (nvme2n1) 00:25:53.405 Could not set queue depth (nvme3n1) 00:25:53.405 Could not set queue depth (nvme4n1) 00:25:53.405 Could not set queue depth (nvme5n1) 00:25:53.405 Could not set queue depth (nvme6n1) 00:25:53.405 Could not set queue depth (nvme7n1) 00:25:53.405 Could not set queue depth (nvme8n1) 00:25:53.405 Could not set queue depth (nvme9n1) 00:25:53.405 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job8: (g=0): rw=read, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.405 fio-3.35 00:25:53.405 Starting 11 threads 00:26:05.644 00:26:05.644 job0: (groupid=0, jobs=1): err= 0: pid=1376768: Wed May 15 15:44:16 2024 00:26:05.644 read: IOPS=494, BW=124MiB/s (130MB/s)(1256MiB/10169msec) 00:26:05.644 slat (usec): min=9, max=231934, avg=1486.13, stdev=11117.32 00:26:05.644 clat (msec): min=2, max=713, avg=127.89, stdev=124.41 00:26:05.644 lat (msec): min=2, max=748, avg=129.38, stdev=126.34 00:26:05.644 clat percentiles (msec): 00:26:05.644 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 29], 00:26:05.644 | 30.00th=[ 46], 40.00th=[ 66], 50.00th=[ 89], 60.00th=[ 121], 00:26:05.644 | 70.00th=[ 159], 80.00th=[ 222], 90.00th=[ 288], 95.00th=[ 330], 00:26:05.644 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 625], 00:26:05.644 | 99.99th=[ 718] 00:26:05.644 bw ( KiB/s): min=28672, max=378368, per=8.03%, avg=127001.60, stdev=83438.80, samples=20 00:26:05.644 iops : min= 112, max= 1478, avg=496.10, stdev=325.93, samples=20 00:26:05.644 lat (msec) : 4=4.06%, 10=8.28%, 20=2.99%, 50=16.94%, 100=22.69% 00:26:05.644 lat (msec) : 250=29.46%, 500=12.50%, 750=3.09% 00:26:05.644 cpu : usr=0.12%, sys=1.39%, ctx=1090, majf=0, minf=4097 00:26:05.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:05.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.644 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.644 job1: (groupid=0, jobs=1): err= 0: pid=1376769: Wed May 15 15:44:16 2024 00:26:05.644 read: IOPS=292, BW=73.2MiB/s (76.8MB/s)(746MiB/10191msec) 00:26:05.644 slat (usec): min=13, max=298397, avg=3247.38, stdev=12050.44 00:26:05.644 clat (usec): min=1428, max=649923, avg=215087.02, stdev=120104.81 00:26:05.644 lat (usec): min=1468, max=695466, avg=218334.40, stdev=121705.49 00:26:05.644 clat percentiles (msec): 00:26:05.644 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 102], 20.00th=[ 126], 00:26:05.644 | 30.00th=[ 150], 40.00th=[ 171], 50.00th=[ 194], 60.00th=[ 226], 00:26:05.644 | 70.00th=[ 262], 80.00th=[ 288], 90.00th=[ 330], 95.00th=[ 518], 00:26:05.644 | 99.00th=[ 592], 99.50th=[ 600], 99.90th=[ 609], 99.95th=[ 617], 00:26:05.644 | 99.99th=[ 651] 00:26:05.644 bw ( KiB/s): min=28160, max=136704, per=4.73%, avg=74752.00, stdev=28332.67, samples=20 00:26:05.645 iops : min= 110, max= 534, avg=292.00, stdev=110.67, samples=20 00:26:05.645 lat (msec) : 2=0.17%, 4=0.07%, 10=1.27%, 20=2.82%, 50=1.54% 00:26:05.645 lat (msec) : 100=3.99%, 250=56.64%, 500=27.82%, 750=5.70% 00:26:05.645 cpu : usr=0.14%, sys=1.14%, ctx=668, majf=0, minf=4097 00:26:05.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:05.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.645 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.645 job2: (groupid=0, jobs=1): err= 0: pid=1376770: Wed May 
15 15:44:16 2024 00:26:05.645 read: IOPS=477, BW=119MiB/s (125MB/s)(1217MiB/10184msec) 00:26:05.645 slat (usec): min=9, max=517128, avg=1226.17, stdev=11646.51 00:26:05.645 clat (usec): min=1645, max=926139, avg=132541.61, stdev=120541.55 00:26:05.645 lat (usec): min=1664, max=1070.0k, avg=133767.78, stdev=122101.34 00:26:05.645 clat percentiles (msec): 00:26:05.645 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 34], 00:26:05.645 | 30.00th=[ 60], 40.00th=[ 85], 50.00th=[ 106], 60.00th=[ 126], 00:26:05.645 | 70.00th=[ 150], 80.00th=[ 207], 90.00th=[ 275], 95.00th=[ 313], 00:26:05.645 | 99.00th=[ 609], 99.50th=[ 709], 99.90th=[ 726], 99.95th=[ 726], 00:26:05.645 | 99.99th=[ 927] 00:26:05.645 bw ( KiB/s): min=32256, max=261632, per=7.78%, avg=122994.45, stdev=57355.37, samples=20 00:26:05.645 iops : min= 126, max= 1022, avg=480.40, stdev=224.05, samples=20 00:26:05.645 lat (msec) : 2=0.12%, 4=1.03%, 10=3.43%, 20=3.53%, 50=19.81% 00:26:05.645 lat (msec) : 100=19.93%, 250=36.76%, 500=12.31%, 750=3.06%, 1000=0.02% 00:26:05.645 cpu : usr=0.21%, sys=1.26%, ctx=1048, majf=0, minf=4097 00:26:05.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:05.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.645 issued rwts: total=4867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.645 job3: (groupid=0, jobs=1): err= 0: pid=1376771: Wed May 15 15:44:16 2024 00:26:05.645 read: IOPS=576, BW=144MiB/s (151MB/s)(1468MiB/10180msec) 00:26:05.645 slat (usec): min=9, max=175362, avg=1463.44, stdev=7344.02 00:26:05.645 clat (usec): min=1294, max=643032, avg=109372.03, stdev=114208.98 00:26:05.645 lat (usec): min=1321, max=669959, avg=110835.48, stdev=115982.94 00:26:05.645 clat percentiles (msec): 00:26:05.645 | 1.00th=[ 3], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 31], 00:26:05.645 | 30.00th=[ 34], 40.00th=[ 54], 50.00th=[ 69], 60.00th=[ 84], 00:26:05.645 | 70.00th=[ 113], 80.00th=[ 159], 90.00th=[ 271], 95.00th=[ 330], 00:26:05.645 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 600], 00:26:05.645 | 99.99th=[ 642] 00:26:05.645 bw ( KiB/s): min=31744, max=498688, per=9.40%, avg=148710.40, stdev=136077.32, samples=20 00:26:05.645 iops : min= 124, max= 1948, avg=580.90, stdev=531.55, samples=20 00:26:05.645 lat (msec) : 2=0.43%, 4=0.92%, 10=0.34%, 20=0.32%, 50=35.75% 00:26:05.645 lat (msec) : 100=28.51%, 250=20.91%, 500=10.52%, 750=2.30% 00:26:05.645 cpu : usr=0.21%, sys=1.74%, ctx=1166, majf=0, minf=4097 00:26:05.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:05.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.645 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.645 job4: (groupid=0, jobs=1): err= 0: pid=1376772: Wed May 15 15:44:16 2024 00:26:05.645 read: IOPS=389, BW=97.3MiB/s (102MB/s)(991MiB/10180msec) 00:26:05.645 slat (usec): min=9, max=219681, avg=1953.25, stdev=9615.61 00:26:05.645 clat (usec): min=1101, max=722776, avg=162290.92, stdev=126343.75 00:26:05.645 lat (usec): min=1144, max=743187, avg=164244.17, stdev=128363.88 00:26:05.645 clat percentiles (msec): 00:26:05.645 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 68], 
00:26:05.645 | 30.00th=[ 89], 40.00th=[ 105], 50.00th=[ 124], 60.00th=[ 153], 00:26:05.645 | 70.00th=[ 205], 80.00th=[ 264], 90.00th=[ 309], 95.00th=[ 447], 00:26:05.645 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 642], 99.95th=[ 651], 00:26:05.645 | 99.99th=[ 726] 00:26:05.645 bw ( KiB/s): min=21504, max=241664, per=6.31%, avg=99814.40, stdev=60886.54, samples=20 00:26:05.645 iops : min= 84, max= 944, avg=389.90, stdev=237.84, samples=20 00:26:05.645 lat (msec) : 2=0.15%, 4=0.05%, 10=1.29%, 20=4.69%, 50=9.61% 00:26:05.645 lat (msec) : 100=21.85%, 250=39.94%, 500=18.50%, 750=3.91% 00:26:05.645 cpu : usr=0.20%, sys=1.07%, ctx=922, majf=0, minf=3721 00:26:05.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:05.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.645 issued rwts: total=3963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.645 job5: (groupid=0, jobs=1): err= 0: pid=1376773: Wed May 15 15:44:16 2024 00:26:05.645 read: IOPS=368, BW=92.1MiB/s (96.5MB/s)(937MiB/10177msec) 00:26:05.645 slat (usec): min=9, max=200968, avg=2437.81, stdev=11308.02 00:26:05.645 clat (msec): min=2, max=632, avg=171.17, stdev=125.00 00:26:05.645 lat (msec): min=2, max=726, avg=173.61, stdev=127.36 00:26:05.645 clat percentiles (msec): 00:26:05.645 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 29], 20.00th=[ 53], 00:26:05.645 | 30.00th=[ 89], 40.00th=[ 126], 50.00th=[ 157], 60.00th=[ 180], 00:26:05.645 | 70.00th=[ 228], 80.00th=[ 257], 90.00th=[ 292], 95.00th=[ 460], 00:26:05.645 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 625], 99.95th=[ 625], 00:26:05.645 | 99.99th=[ 634] 00:26:05.645 bw ( KiB/s): min=31232, max=319102, per=5.96%, avg=94342.30, stdev=77273.87, samples=20 00:26:05.645 iops : min= 122, max= 1246, avg=368.50, stdev=301.78, samples=20 00:26:05.645 lat (msec) : 4=0.08%, 10=1.39%, 20=4.88%, 50=13.02%, 100=12.49% 00:26:05.645 lat (msec) : 250=44.72%, 500=18.97%, 750=4.46% 00:26:05.645 cpu : usr=0.20%, sys=1.13%, ctx=652, majf=0, minf=4097 00:26:05.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:05.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.645 issued rwts: total=3748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.645 job6: (groupid=0, jobs=1): err= 0: pid=1376774: Wed May 15 15:44:16 2024 00:26:05.645 read: IOPS=365, BW=91.3MiB/s (95.8MB/s)(931MiB/10191msec) 00:26:05.645 slat (usec): min=9, max=275146, avg=2399.73, stdev=11335.46 00:26:05.645 clat (msec): min=5, max=717, avg=172.57, stdev=128.62 00:26:05.645 lat (msec): min=5, max=873, avg=174.97, stdev=130.76 00:26:05.645 clat percentiles (msec): 00:26:05.645 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 67], 00:26:05.645 | 30.00th=[ 93], 40.00th=[ 121], 50.00th=[ 146], 60.00th=[ 171], 00:26:05.645 | 70.00th=[ 215], 80.00th=[ 266], 90.00th=[ 317], 95.00th=[ 430], 00:26:05.645 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 684], 99.95th=[ 718], 00:26:05.645 | 99.99th=[ 718] 00:26:05.645 bw ( KiB/s): min=23552, max=223744, per=5.92%, avg=93670.40, stdev=54487.27, samples=20 00:26:05.645 iops : min= 92, max= 874, avg=365.90, stdev=212.84, samples=20 00:26:05.645 lat (msec) : 10=0.59%, 20=4.54%, 
50=10.50%, 100=17.89%, 250=42.73% 00:26:05.646 lat (msec) : 500=19.58%, 750=4.16% 00:26:05.646 cpu : usr=0.17%, sys=1.26%, ctx=789, majf=0, minf=4097 00:26:05.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:05.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.646 issued rwts: total=3723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.646 job7: (groupid=0, jobs=1): err= 0: pid=1376780: Wed May 15 15:44:16 2024 00:26:05.646 read: IOPS=630, BW=158MiB/s (165MB/s)(1580MiB/10022msec) 00:26:05.646 slat (usec): min=9, max=469028, avg=1091.79, stdev=7559.08 00:26:05.646 clat (usec): min=900, max=683482, avg=100288.52, stdev=82554.75 00:26:05.646 lat (usec): min=928, max=1038.8k, avg=101380.30, stdev=83636.65 00:26:05.646 clat percentiles (msec): 00:26:05.646 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 54], 00:26:05.646 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 88], 00:26:05.646 | 70.00th=[ 104], 80.00th=[ 124], 90.00th=[ 186], 95.00th=[ 247], 00:26:05.646 | 99.00th=[ 523], 99.50th=[ 542], 99.90th=[ 667], 99.95th=[ 676], 00:26:05.646 | 99.99th=[ 684] 00:26:05.646 bw ( KiB/s): min=68608, max=283648, per=10.13%, avg=160220.80, stdev=59438.56, samples=20 00:26:05.646 iops : min= 268, max= 1108, avg=625.85, stdev=232.18, samples=20 00:26:05.646 lat (usec) : 1000=0.02% 00:26:05.646 lat (msec) : 2=0.38%, 4=0.30%, 10=1.41%, 20=3.56%, 50=11.91% 00:26:05.646 lat (msec) : 100=50.31%, 250=27.42%, 500=3.26%, 750=1.44% 00:26:05.646 cpu : usr=0.27%, sys=1.82%, ctx=1200, majf=0, minf=4097 00:26:05.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:05.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.646 issued rwts: total=6321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.646 job8: (groupid=0, jobs=1): err= 0: pid=1376786: Wed May 15 15:44:16 2024 00:26:05.646 read: IOPS=882, BW=221MiB/s (231MB/s)(2246MiB/10179msec) 00:26:05.646 slat (usec): min=9, max=240411, avg=773.69, stdev=5211.47 00:26:05.646 clat (usec): min=830, max=434010, avg=71649.94, stdev=68684.20 00:26:05.646 lat (usec): min=886, max=492471, avg=72423.62, stdev=69434.69 00:26:05.646 clat percentiles (msec): 00:26:05.646 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 29], 00:26:05.646 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 45], 60.00th=[ 57], 00:26:05.646 | 70.00th=[ 72], 80.00th=[ 110], 90.00th=[ 165], 95.00th=[ 251], 00:26:05.646 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 418], 99.95th=[ 426], 00:26:05.646 | 99.99th=[ 435] 00:26:05.646 bw ( KiB/s): min=55808, max=503808, per=14.44%, avg=228364.35, stdev=133857.58, samples=20 00:26:05.646 iops : min= 218, max= 1968, avg=892.00, stdev=522.92, samples=20 00:26:05.646 lat (usec) : 1000=0.11% 00:26:05.646 lat (msec) : 2=0.17%, 4=0.70%, 10=2.49%, 20=4.84%, 50=46.08% 00:26:05.646 lat (msec) : 100=23.35%, 250=17.39%, 500=4.86% 00:26:05.646 cpu : usr=0.44%, sys=2.67%, ctx=1788, majf=0, minf=4097 00:26:05.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:05.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:26:05.646 issued rwts: total=8984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.646 job9: (groupid=0, jobs=1): err= 0: pid=1376787: Wed May 15 15:44:16 2024 00:26:05.646 read: IOPS=720, BW=180MiB/s (189MB/s)(1805MiB/10019msec) 00:26:05.646 slat (usec): min=10, max=209863, avg=871.93, stdev=4540.82 00:26:05.646 clat (usec): min=985, max=698300, avg=87886.90, stdev=88439.18 00:26:05.646 lat (usec): min=1042, max=698374, avg=88758.83, stdev=88880.64 00:26:05.646 clat percentiles (msec): 00:26:05.646 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 22], 20.00th=[ 45], 00:26:05.646 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 79], 00:26:05.646 | 70.00th=[ 93], 80.00th=[ 113], 90.00th=[ 142], 95.00th=[ 188], 00:26:05.646 | 99.00th=[ 550], 99.50th=[ 600], 99.90th=[ 676], 99.95th=[ 676], 00:26:05.646 | 99.99th=[ 701] 00:26:05.646 bw ( KiB/s): min=25088, max=357888, per=11.58%, avg=183168.00, stdev=92521.56, samples=20 00:26:05.646 iops : min= 98, max= 1398, avg=715.50, stdev=361.41, samples=20 00:26:05.646 lat (usec) : 1000=0.01% 00:26:05.646 lat (msec) : 2=0.32%, 4=0.86%, 10=3.48%, 20=4.52%, 50=16.82% 00:26:05.646 lat (msec) : 100=47.78%, 250=22.69%, 500=1.62%, 750=1.90% 00:26:05.646 cpu : usr=0.35%, sys=1.91%, ctx=1434, majf=0, minf=4097 00:26:05.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:26:05.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.646 issued rwts: total=7218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.646 job10: (groupid=0, jobs=1): err= 0: pid=1376788: Wed May 15 15:44:16 2024 00:26:05.646 read: IOPS=1023, BW=256MiB/s (268MB/s)(2565MiB/10029msec) 00:26:05.646 slat (usec): min=9, max=63065, avg=764.84, stdev=2860.95 00:26:05.646 clat (usec): min=1658, max=248603, avg=61733.23, stdev=37148.03 00:26:05.646 lat (usec): min=1681, max=248629, avg=62498.06, stdev=37574.25 00:26:05.646 clat percentiles (msec): 00:26:05.646 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 31], 00:26:05.646 | 30.00th=[ 34], 40.00th=[ 41], 50.00th=[ 51], 60.00th=[ 65], 00:26:05.646 | 70.00th=[ 78], 80.00th=[ 90], 90.00th=[ 111], 95.00th=[ 136], 00:26:05.646 | 99.00th=[ 174], 99.50th=[ 215], 99.90th=[ 234], 99.95th=[ 243], 00:26:05.646 | 99.99th=[ 249] 00:26:05.646 bw ( KiB/s): min=120320, max=502272, per=16.50%, avg=261043.20, stdev=119569.18, samples=20 00:26:05.646 iops : min= 470, max= 1962, avg=1019.70, stdev=467.07, samples=20 00:26:05.646 lat (msec) : 2=0.01%, 4=0.15%, 10=1.36%, 20=1.84%, 50=46.08% 00:26:05.646 lat (msec) : 100=35.91%, 250=14.65% 00:26:05.646 cpu : usr=0.52%, sys=2.86%, ctx=1845, majf=0, minf=4097 00:26:05.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:05.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.646 issued rwts: total=10260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.646 00:26:05.646 Run status group 0 (all jobs): 00:26:05.646 READ: bw=1545MiB/s (1620MB/s), 73.2MiB/s-256MiB/s (76.8MB/s-268MB/s), io=15.4GiB (16.5GB), run=10019-10191msec 00:26:05.646 00:26:05.646 Disk stats (read/write): 00:26:05.646 nvme0n1: ios=9738/0, 
merge=0/0, ticks=1214141/0, in_queue=1214141, util=97.17% 00:26:05.646 nvme10n1: ios=5905/0, merge=0/0, ticks=1247234/0, in_queue=1247234, util=97.45% 00:26:05.646 nvme1n1: ios=9435/0, merge=0/0, ticks=1246115/0, in_queue=1246115, util=97.65% 00:26:05.646 nvme2n1: ios=11590/0, merge=0/0, ticks=1221590/0, in_queue=1221590, util=97.79% 00:26:05.646 nvme3n1: ios=7785/0, merge=0/0, ticks=1222406/0, in_queue=1222406, util=97.85% 00:26:05.646 nvme4n1: ios=7368/0, merge=0/0, ticks=1227405/0, in_queue=1227405, util=98.17% 00:26:05.646 nvme5n1: ios=7395/0, merge=0/0, ticks=1258101/0, in_queue=1258101, util=98.37% 00:26:05.646 nvme6n1: ios=12395/0, merge=0/0, ticks=1242791/0, in_queue=1242791, util=98.43% 00:26:05.646 nvme7n1: ios=17841/0, merge=0/0, ticks=1233818/0, in_queue=1233818, util=98.88% 00:26:05.646 nvme8n1: ios=14150/0, merge=0/0, ticks=1246361/0, in_queue=1246361, util=99.07% 00:26:05.646 nvme9n1: ios=20277/0, merge=0/0, ticks=1239427/0, in_queue=1239427, util=99.20% 00:26:05.646 15:44:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:05.646 [global] 00:26:05.646 thread=1 00:26:05.646 invalidate=1 00:26:05.646 rw=randwrite 00:26:05.646 time_based=1 00:26:05.647 runtime=10 00:26:05.647 ioengine=libaio 00:26:05.647 direct=1 00:26:05.647 bs=262144 00:26:05.647 iodepth=64 00:26:05.647 norandommap=1 00:26:05.647 numjobs=1 00:26:05.647 00:26:05.647 [job0] 00:26:05.647 filename=/dev/nvme0n1 00:26:05.647 [job1] 00:26:05.647 filename=/dev/nvme10n1 00:26:05.647 [job2] 00:26:05.647 filename=/dev/nvme1n1 00:26:05.647 [job3] 00:26:05.647 filename=/dev/nvme2n1 00:26:05.647 [job4] 00:26:05.647 filename=/dev/nvme3n1 00:26:05.647 [job5] 00:26:05.647 filename=/dev/nvme4n1 00:26:05.647 [job6] 00:26:05.647 filename=/dev/nvme5n1 00:26:05.647 [job7] 00:26:05.647 filename=/dev/nvme6n1 00:26:05.647 [job8] 00:26:05.647 filename=/dev/nvme7n1 00:26:05.647 [job9] 00:26:05.647 filename=/dev/nvme8n1 00:26:05.647 [job10] 00:26:05.647 filename=/dev/nvme9n1 00:26:05.647 Could not set queue depth (nvme0n1) 00:26:05.647 Could not set queue depth (nvme10n1) 00:26:05.647 Could not set queue depth (nvme1n1) 00:26:05.647 Could not set queue depth (nvme2n1) 00:26:05.647 Could not set queue depth (nvme3n1) 00:26:05.647 Could not set queue depth (nvme4n1) 00:26:05.647 Could not set queue depth (nvme5n1) 00:26:05.647 Could not set queue depth (nvme6n1) 00:26:05.647 Could not set queue depth (nvme7n1) 00:26:05.647 Could not set queue depth (nvme8n1) 00:26:05.647 Could not set queue depth (nvme9n1) 00:26:05.647 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job7: 
(g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.647 fio-3.35 00:26:05.647 Starting 11 threads 00:26:15.614 00:26:15.614 job0: (groupid=0, jobs=1): err= 0: pid=1377810: Wed May 15 15:44:27 2024 00:26:15.614 write: IOPS=459, BW=115MiB/s (121MB/s)(1169MiB/10168msec); 0 zone resets 00:26:15.614 slat (usec): min=16, max=206363, avg=1741.23, stdev=5039.68 00:26:15.614 clat (usec): min=1058, max=488615, avg=137210.09, stdev=81516.13 00:26:15.614 lat (usec): min=1109, max=515565, avg=138951.32, stdev=82401.94 00:26:15.614 clat percentiles (msec): 00:26:15.614 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 61], 20.00th=[ 80], 00:26:15.614 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 117], 60.00th=[ 155], 00:26:15.614 | 70.00th=[ 178], 80.00th=[ 205], 90.00th=[ 236], 95.00th=[ 288], 00:26:15.614 | 99.00th=[ 384], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:26:15.614 | 99.99th=[ 489] 00:26:15.614 bw ( KiB/s): min=60416, max=230912, per=9.18%, avg=118113.25, stdev=49227.64, samples=20 00:26:15.614 iops : min= 236, max= 902, avg=461.30, stdev=192.29, samples=20 00:26:15.614 lat (msec) : 2=0.15%, 4=0.79%, 10=2.74%, 20=1.60%, 50=3.06% 00:26:15.614 lat (msec) : 100=37.52%, 250=46.35%, 500=7.78% 00:26:15.614 cpu : usr=1.47%, sys=1.52%, ctx=2021, majf=0, minf=1 00:26:15.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:15.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.614 issued rwts: total=0,4677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.614 job1: (groupid=0, jobs=1): err= 0: pid=1377824: Wed May 15 15:44:27 2024 00:26:15.614 write: IOPS=388, BW=97.1MiB/s (102MB/s)(980MiB/10089msec); 0 zone resets 00:26:15.614 slat (usec): min=25, max=93404, avg=1950.99, stdev=5244.30 00:26:15.614 clat (usec): min=1160, max=487500, avg=162715.77, stdev=99436.01 00:26:15.614 lat (usec): min=1214, max=494516, avg=164666.76, stdev=100790.05 00:26:15.614 clat percentiles (msec): 00:26:15.614 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 57], 00:26:15.614 | 30.00th=[ 114], 40.00th=[ 148], 50.00th=[ 171], 60.00th=[ 182], 00:26:15.614 | 70.00th=[ 209], 80.00th=[ 234], 90.00th=[ 275], 95.00th=[ 359], 00:26:15.614 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 481], 99.95th=[ 489], 00:26:15.614 | 99.99th=[ 489] 00:26:15.614 bw ( KiB/s): min=40960, max=184320, per=7.67%, avg=98680.10, stdev=39281.09, samples=20 00:26:15.614 iops : min= 160, max= 720, avg=385.45, stdev=153.45, samples=20 00:26:15.614 lat (msec) : 2=0.20%, 4=0.48%, 10=2.60%, 20=5.26%, 50=10.16% 00:26:15.614 lat (msec) : 100=9.19%, 250=55.62%, 500=16.49% 00:26:15.614 cpu : usr=1.55%, sys=1.22%, ctx=2190, majf=0, minf=1 00:26:15.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:15.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.614 issued 
rwts: total=0,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.614 job2: (groupid=0, jobs=1): err= 0: pid=1377825: Wed May 15 15:44:27 2024 00:26:15.614 write: IOPS=461, BW=115MiB/s (121MB/s)(1174MiB/10172msec); 0 zone resets 00:26:15.614 slat (usec): min=15, max=80970, avg=1633.69, stdev=4908.90 00:26:15.614 clat (usec): min=1212, max=481768, avg=136758.74, stdev=101321.58 00:26:15.614 lat (usec): min=1241, max=481824, avg=138392.44, stdev=102635.25 00:26:15.614 clat percentiles (msec): 00:26:15.614 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 16], 20.00th=[ 54], 00:26:15.614 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 117], 60.00th=[ 153], 00:26:15.614 | 70.00th=[ 171], 80.00th=[ 199], 90.00th=[ 262], 95.00th=[ 347], 00:26:15.614 | 99.00th=[ 472], 99.50th=[ 477], 99.90th=[ 481], 99.95th=[ 481], 00:26:15.614 | 99.99th=[ 481] 00:26:15.614 bw ( KiB/s): min=38912, max=319488, per=9.21%, avg=118550.60, stdev=67341.57, samples=20 00:26:15.614 iops : min= 152, max= 1248, avg=463.05, stdev=263.07, samples=20 00:26:15.614 lat (msec) : 2=0.30%, 4=2.24%, 10=4.37%, 20=4.67%, 50=7.65% 00:26:15.614 lat (msec) : 100=24.76%, 250=43.97%, 500=12.06% 00:26:15.614 cpu : usr=1.44%, sys=1.53%, ctx=2601, majf=0, minf=1 00:26:15.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:15.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.614 issued rwts: total=0,4694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.614 job3: (groupid=0, jobs=1): err= 0: pid=1377826: Wed May 15 15:44:27 2024 00:26:15.614 write: IOPS=343, BW=85.8MiB/s (90.0MB/s)(873MiB/10172msec); 0 zone resets 00:26:15.614 slat (usec): min=24, max=91655, avg=2443.32, stdev=5996.37 00:26:15.614 clat (msec): min=3, max=479, avg=183.79, stdev=91.70 00:26:15.614 lat (msec): min=3, max=480, avg=186.23, stdev=92.98 00:26:15.614 clat percentiles (msec): 00:26:15.614 | 1.00th=[ 20], 5.00th=[ 47], 10.00th=[ 69], 20.00th=[ 87], 00:26:15.614 | 30.00th=[ 132], 40.00th=[ 176], 50.00th=[ 190], 60.00th=[ 203], 00:26:15.614 | 70.00th=[ 215], 80.00th=[ 241], 90.00th=[ 321], 95.00th=[ 368], 00:26:15.614 | 99.00th=[ 405], 99.50th=[ 418], 99.90th=[ 477], 99.95th=[ 481], 00:26:15.614 | 99.99th=[ 481] 00:26:15.614 bw ( KiB/s): min=34816, max=208384, per=6.82%, avg=87762.30, stdev=38778.55, samples=20 00:26:15.614 iops : min= 136, max= 814, avg=342.80, stdev=151.46, samples=20 00:26:15.614 lat (msec) : 4=0.03%, 10=0.34%, 20=0.74%, 50=4.67%, 100=17.90% 00:26:15.614 lat (msec) : 250=58.98%, 500=17.33% 00:26:15.614 cpu : usr=1.15%, sys=1.18%, ctx=1506, majf=0, minf=1 00:26:15.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:15.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.614 issued rwts: total=0,3491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.614 job4: (groupid=0, jobs=1): err= 0: pid=1377827: Wed May 15 15:44:27 2024 00:26:15.614 write: IOPS=516, BW=129MiB/s (135MB/s)(1313MiB/10168msec); 0 zone resets 00:26:15.614 slat (usec): min=16, max=89248, avg=1366.97, stdev=3807.35 00:26:15.614 clat (usec): min=1030, max=361364, avg=122477.51, stdev=74352.13 00:26:15.614 lat 
(usec): min=1064, max=376970, avg=123844.48, stdev=75050.31 00:26:15.614 clat percentiles (msec): 00:26:15.614 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 42], 20.00th=[ 69], 00:26:15.614 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 121], 00:26:15.614 | 70.00th=[ 157], 80.00th=[ 194], 90.00th=[ 241], 95.00th=[ 259], 00:26:15.614 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 326], 99.95th=[ 330], 00:26:15.614 | 99.99th=[ 363] 00:26:15.615 bw ( KiB/s): min=69632, max=196608, per=10.32%, avg=132797.45, stdev=43027.32, samples=20 00:26:15.615 iops : min= 272, max= 768, avg=518.70, stdev=168.06, samples=20 00:26:15.615 lat (msec) : 2=0.44%, 4=2.11%, 10=2.74%, 20=1.75%, 50=9.24% 00:26:15.615 lat (msec) : 100=35.25%, 250=41.10%, 500=7.37% 00:26:15.615 cpu : usr=1.70%, sys=1.58%, ctx=2652, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,5251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 job5: (groupid=0, jobs=1): err= 0: pid=1377828: Wed May 15 15:44:27 2024 00:26:15.615 write: IOPS=473, BW=118MiB/s (124MB/s)(1203MiB/10163msec); 0 zone resets 00:26:15.615 slat (usec): min=16, max=62927, avg=1294.01, stdev=4187.17 00:26:15.615 clat (usec): min=883, max=415071, avg=133381.67, stdev=94032.04 00:26:15.615 lat (usec): min=918, max=421114, avg=134675.69, stdev=95053.45 00:26:15.615 clat percentiles (msec): 00:26:15.615 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 27], 20.00th=[ 51], 00:26:15.615 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 105], 60.00th=[ 146], 00:26:15.615 | 70.00th=[ 188], 80.00th=[ 213], 90.00th=[ 249], 95.00th=[ 321], 00:26:15.615 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 409], 99.95th=[ 414], 00:26:15.615 | 99.99th=[ 414] 00:26:15.615 bw ( KiB/s): min=43008, max=202752, per=9.45%, avg=121596.70, stdev=52544.89, samples=20 00:26:15.615 iops : min= 168, max= 792, avg=474.95, stdev=205.30, samples=20 00:26:15.615 lat (usec) : 1000=0.08% 00:26:15.615 lat (msec) : 2=0.46%, 4=1.31%, 10=2.95%, 20=2.91%, 50=11.99% 00:26:15.615 lat (msec) : 100=28.61%, 250=41.95%, 500=9.74% 00:26:15.615 cpu : usr=1.41%, sys=1.69%, ctx=2908, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,4813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 job6: (groupid=0, jobs=1): err= 0: pid=1377829: Wed May 15 15:44:27 2024 00:26:15.615 write: IOPS=632, BW=158MiB/s (166MB/s)(1605MiB/10153msec); 0 zone resets 00:26:15.615 slat (usec): min=17, max=48627, avg=1152.95, stdev=3473.63 00:26:15.615 clat (usec): min=924, max=365759, avg=100004.52, stdev=81005.72 00:26:15.615 lat (usec): min=993, max=368769, avg=101157.47, stdev=82057.38 00:26:15.615 clat percentiles (msec): 00:26:15.615 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 28], 20.00th=[ 41], 00:26:15.615 | 30.00th=[ 43], 40.00th=[ 46], 50.00th=[ 67], 60.00th=[ 83], 00:26:15.615 | 70.00th=[ 126], 80.00th=[ 184], 90.00th=[ 230], 95.00th=[ 259], 00:26:15.615 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 351], 99.95th=[ 355], 
00:26:15.615 | 99.99th=[ 368] 00:26:15.615 bw ( KiB/s): min=55808, max=360960, per=12.64%, avg=162688.80, stdev=98418.17, samples=20 00:26:15.615 iops : min= 218, max= 1410, avg=635.45, stdev=384.45, samples=20 00:26:15.615 lat (usec) : 1000=0.03% 00:26:15.615 lat (msec) : 2=0.34%, 4=1.20%, 10=2.62%, 20=3.66%, 50=34.96% 00:26:15.615 lat (msec) : 100=21.95%, 250=28.74%, 500=6.50% 00:26:15.615 cpu : usr=1.94%, sys=2.07%, ctx=3167, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,6419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 job7: (groupid=0, jobs=1): err= 0: pid=1377830: Wed May 15 15:44:27 2024 00:26:15.615 write: IOPS=406, BW=102MiB/s (107MB/s)(1035MiB/10172msec); 0 zone resets 00:26:15.615 slat (usec): min=20, max=67499, avg=2020.33, stdev=4932.53 00:26:15.615 clat (msec): min=3, max=442, avg=155.02, stdev=80.35 00:26:15.615 lat (msec): min=3, max=442, avg=157.04, stdev=81.35 00:26:15.615 clat percentiles (msec): 00:26:15.615 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 66], 20.00th=[ 81], 00:26:15.615 | 30.00th=[ 90], 40.00th=[ 130], 50.00th=[ 157], 60.00th=[ 174], 00:26:15.615 | 70.00th=[ 205], 80.00th=[ 228], 90.00th=[ 257], 95.00th=[ 288], 00:26:15.615 | 99.00th=[ 368], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 426], 00:26:15.615 | 99.99th=[ 443] 00:26:15.615 bw ( KiB/s): min=57344, max=204800, per=8.11%, avg=104367.15, stdev=41184.56, samples=20 00:26:15.615 iops : min= 224, max= 800, avg=407.60, stdev=160.85, samples=20 00:26:15.615 lat (msec) : 4=0.14%, 10=1.64%, 20=2.13%, 50=3.91%, 100=25.65% 00:26:15.615 lat (msec) : 250=54.08%, 500=12.44% 00:26:15.615 cpu : usr=1.31%, sys=1.40%, ctx=1828, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,4140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 job8: (groupid=0, jobs=1): err= 0: pid=1377831: Wed May 15 15:44:27 2024 00:26:15.615 write: IOPS=482, BW=121MiB/s (126MB/s)(1216MiB/10086msec); 0 zone resets 00:26:15.615 slat (usec): min=24, max=46731, avg=1838.94, stdev=4542.70 00:26:15.615 clat (msec): min=2, max=432, avg=130.73, stdev=90.70 00:26:15.615 lat (msec): min=3, max=433, avg=132.57, stdev=92.03 00:26:15.615 clat percentiles (msec): 00:26:15.615 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 55], 00:26:15.615 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 140], 00:26:15.615 | 70.00th=[ 169], 80.00th=[ 205], 90.00th=[ 243], 95.00th=[ 338], 00:26:15.615 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 426], 99.95th=[ 435], 00:26:15.615 | 99.99th=[ 435] 00:26:15.615 bw ( KiB/s): min=38912, max=302080, per=9.55%, avg=122877.00, stdev=70394.64, samples=20 00:26:15.615 iops : min= 152, max= 1180, avg=479.95, stdev=274.99, samples=20 00:26:15.615 lat (msec) : 4=0.04%, 10=0.41%, 20=1.19%, 50=15.65%, 100=35.33% 00:26:15.615 lat (msec) : 250=38.52%, 500=8.86% 00:26:15.615 cpu : usr=1.72%, sys=1.53%, ctx=1983, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.3%, 32=0.7%, >=64=98.7% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,4863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 job9: (groupid=0, jobs=1): err= 0: pid=1377832: Wed May 15 15:44:27 2024 00:26:15.615 write: IOPS=473, BW=118MiB/s (124MB/s)(1204MiB/10167msec); 0 zone resets 00:26:15.615 slat (usec): min=23, max=68114, avg=1779.84, stdev=4581.86 00:26:15.615 clat (usec): min=1265, max=412879, avg=133259.03, stdev=96735.69 00:26:15.615 lat (usec): min=1312, max=424286, avg=135038.87, stdev=97984.07 00:26:15.615 clat percentiles (msec): 00:26:15.615 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 42], 20.00th=[ 62], 00:26:15.615 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 118], 00:26:15.615 | 70.00th=[ 174], 80.00th=[ 213], 90.00th=[ 288], 95.00th=[ 342], 00:26:15.615 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 409], 99.95th=[ 409], 00:26:15.615 | 99.99th=[ 414] 00:26:15.615 bw ( KiB/s): min=45056, max=232400, per=9.45%, avg=121669.15, stdev=62210.99, samples=20 00:26:15.615 iops : min= 176, max= 907, avg=475.20, stdev=242.97, samples=20 00:26:15.615 lat (msec) : 2=0.04%, 4=0.85%, 10=1.72%, 20=2.93%, 50=7.95% 00:26:15.615 lat (msec) : 100=41.64%, 250=30.99%, 500=13.87% 00:26:15.615 cpu : usr=1.76%, sys=1.55%, ctx=2039, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,4815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 job10: (groupid=0, jobs=1): err= 0: pid=1377833: Wed May 15 15:44:27 2024 00:26:15.615 write: IOPS=398, BW=99.6MiB/s (104MB/s)(1013MiB/10171msec); 0 zone resets 00:26:15.615 slat (usec): min=16, max=213432, avg=1960.10, stdev=5935.64 00:26:15.615 clat (usec): min=845, max=497948, avg=158657.15, stdev=90419.77 00:26:15.615 lat (usec): min=892, max=497998, avg=160617.25, stdev=91635.04 00:26:15.615 clat percentiles (msec): 00:26:15.615 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 90], 00:26:15.615 | 30.00th=[ 97], 40.00th=[ 133], 50.00th=[ 163], 60.00th=[ 190], 00:26:15.615 | 70.00th=[ 209], 80.00th=[ 224], 90.00th=[ 259], 95.00th=[ 321], 00:26:15.615 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 456], 99.95th=[ 493], 00:26:15.615 | 99.99th=[ 498] 00:26:15.615 bw ( KiB/s): min=49152, max=209920, per=7.93%, avg=102059.55, stdev=45338.69, samples=20 00:26:15.615 iops : min= 192, max= 820, avg=398.65, stdev=177.06, samples=20 00:26:15.615 lat (usec) : 1000=0.05% 00:26:15.615 lat (msec) : 2=0.54%, 4=1.14%, 10=2.20%, 20=3.51%, 50=6.54% 00:26:15.615 lat (msec) : 100=19.18%, 250=54.26%, 500=12.59% 00:26:15.615 cpu : usr=1.25%, sys=1.26%, ctx=2063, majf=0, minf=1 00:26:15.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:15.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.615 issued rwts: total=0,4051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.615 00:26:15.615 Run status group 0 (all jobs): 
00:26:15.615 WRITE: bw=1257MiB/s (1318MB/s), 85.8MiB/s-158MiB/s (90.0MB/s-166MB/s), io=12.5GiB (13.4GB), run=10086-10172msec 00:26:15.615 00:26:15.615 Disk stats (read/write): 00:26:15.615 nvme0n1: ios=42/9334, merge=0/0, ticks=963/1237768, in_queue=1238731, util=100.00% 00:26:15.615 nvme10n1: ios=43/7612, merge=0/0, ticks=1072/1210339, in_queue=1211411, util=100.00% 00:26:15.615 nvme1n1: ios=35/9363, merge=0/0, ticks=767/1238047, in_queue=1238814, util=100.00% 00:26:15.615 nvme2n1: ios=38/6957, merge=0/0, ticks=484/1234949, in_queue=1235433, util=100.00% 00:26:15.615 nvme3n1: ios=0/10470, merge=0/0, ticks=0/1241055, in_queue=1241055, util=97.58% 00:26:15.615 nvme4n1: ios=53/9603, merge=0/0, ticks=1498/1242340, in_queue=1243838, util=100.00% 00:26:15.615 nvme5n1: ios=0/12825, merge=0/0, ticks=0/1241046, in_queue=1241046, util=98.10% 00:26:15.615 nvme6n1: ios=40/8256, merge=0/0, ticks=713/1236253, in_queue=1236966, util=100.00% 00:26:15.615 nvme7n1: ios=43/9507, merge=0/0, ticks=512/1203522, in_queue=1204034, util=100.00% 00:26:15.615 nvme8n1: ios=0/9603, merge=0/0, ticks=0/1235348, in_queue=1235348, util=98.88% 00:26:15.615 nvme9n1: ios=0/8077, merge=0/0, ticks=0/1239400, in_queue=1239400, util=99.08% 00:26:15.615 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:15.615 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:15.615 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:15.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.616 15:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:15.616 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:15.616 15:44:28 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:15.616 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.616 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:16.184 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:16.184 15:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:16.184 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.184 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.184 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:26:16.184 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.184 15:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:26:16.184 15:44:29 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:16.184 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.184 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:16.444 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.444 15:44:29 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.444 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:16.702 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:16.702 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.702 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:16.969 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.969 15:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:16.969 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.969 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:17.232 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:17.232 rmmod nvme_tcp 00:26:17.232 rmmod nvme_fabrics 00:26:17.232 rmmod nvme_keyring 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1372533 ']' 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1372533 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1372533 ']' 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1372533 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1372533 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1372533' 00:26:17.232 killing process with pid 1372533 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1372533 00:26:17.232 [2024-05-15 15:44:30.196600] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:17.232 15:44:30 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@970 -- # wait 1372533 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.798 15:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.707 15:44:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:19.707 00:26:19.707 real 1m0.590s 00:26:19.707 user 3m16.391s 00:26:19.707 sys 0m23.991s 00:26:19.707 15:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:19.707 15:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.707 ************************************ 00:26:19.707 END TEST nvmf_multiconnection 00:26:19.707 ************************************ 00:26:19.707 15:44:32 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:19.707 15:44:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:19.707 15:44:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:19.707 15:44:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:19.707 ************************************ 00:26:19.707 START TEST nvmf_initiator_timeout 00:26:19.707 ************************************ 00:26:19.707 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:19.965 * Looking for test storage... 
00:26:19.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.965 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:19.966 15:44:32 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:19.966 15:44:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.513 
15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:22.513 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:22.513 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:22.513 Found net devices 
under 0000:09:00.0: cvl_0_0 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:22.513 Found net devices under 0000:09:00.1: cvl_0_1 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.513 15:44:35 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:22.513 00:26:22.513 --- 10.0.0.2 ping statistics --- 00:26:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.513 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:26:22.513 00:26:22.513 --- 10.0.0.1 ping statistics --- 00:26:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.513 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.513 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1381450 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1381450 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1381450 ']' 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:22.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:22.514 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.514 [2024-05-15 15:44:35.412605] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:22.514 [2024-05-15 15:44:35.412686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.514 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.514 [2024-05-15 15:44:35.460110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:22.514 [2024-05-15 15:44:35.496403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.514 [2024-05-15 15:44:35.588194] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.514 [2024-05-15 15:44:35.588259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.514 [2024-05-15 15:44:35.588276] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.514 [2024-05-15 15:44:35.588291] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.514 [2024-05-15 15:44:35.588303] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.514 [2024-05-15 15:44:35.588360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.514 [2024-05-15 15:44:35.588414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.514 [2024-05-15 15:44:35.588451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.514 [2024-05-15 15:44:35.588455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 Malloc0 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.773 15:44:35 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 Delay0 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 [2024-05-15 15:44:35.772156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 [2024-05-15 15:44:35.800175] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:22.773 [2024-05-15 15:44:35.800496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.773 15:44:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:23.723 15:44:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:23.723 15:44:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:26:23.723 15:44:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:23.723 15:44:36 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:23.723 15:44:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:26:25.625 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:25.625 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:25.625 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:25.625 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:25.626 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:25.626 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:26:25.626 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1381877 00:26:25.626 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:25.626 15:44:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:25.626 [global] 00:26:25.626 thread=1 00:26:25.626 invalidate=1 00:26:25.626 rw=write 00:26:25.626 time_based=1 00:26:25.626 runtime=60 00:26:25.626 ioengine=libaio 00:26:25.626 direct=1 00:26:25.626 bs=4096 00:26:25.626 iodepth=1 00:26:25.626 norandommap=0 00:26:25.626 numjobs=1 00:26:25.626 00:26:25.626 verify_dump=1 00:26:25.626 verify_backlog=512 00:26:25.626 verify_state_save=0 00:26:25.626 do_verify=1 00:26:25.626 verify=crc32c-intel 00:26:25.626 [job0] 00:26:25.626 filename=/dev/nvme0n1 00:26:25.626 Could not set queue depth (nvme0n1) 00:26:25.626 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:25.626 fio-3.35 00:26:25.626 Starting 1 thread 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.909 true 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.909 true 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.909 true 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 
p99_write 310000000 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.909 true 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.909 15:44:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:31.476 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.477 true 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.477 true 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.477 true 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.477 true 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:31.477 15:44:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1381877 00:27:27.723 00:27:27.723 job0: (groupid=0, jobs=1): err= 0: pid=1381946: Wed May 15 15:45:38 2024 00:27:27.723 read: IOPS=229, BW=919KiB/s (941kB/s)(53.9MiB/60041msec) 00:27:27.723 slat (usec): min=4, max=7867, avg=17.39, stdev=67.68 00:27:27.723 clat (usec): min=276, max=41241k, avg=4065.97, stdev=351162.66 00:27:27.724 lat (usec): min=281, max=41241k, avg=4083.36, stdev=351162.72 00:27:27.724 clat percentiles (usec): 00:27:27.724 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:27:27.724 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 379], 00:27:27.724 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 482], 95.00th=[ 506], 00:27:27.724 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:27:27.724 | 99.99th=[42730] 00:27:27.724 write: IOPS=230, BW=921KiB/s (943kB/s)(54.0MiB/60041msec); 0 zone resets 00:27:27.724 slat (nsec): min=5770, max=89755, avg=15118.09, stdev=10858.50 00:27:27.724 clat (usec): min=184, max=503, avg=244.37, 
stdev=46.67 00:27:27.724 lat (usec): min=190, max=544, avg=259.49, stdev=54.28 00:27:27.724 clat percentiles (usec): 00:27:27.724 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:27:27.724 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:27:27.724 | 70.00th=[ 247], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 351], 00:27:27.724 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 441], 99.95th=[ 457], 00:27:27.724 | 99.99th=[ 478] 00:27:27.724 bw ( KiB/s): min= 3080, max= 8192, per=100.00%, avg=5266.29, stdev=1730.94, samples=21 00:27:27.724 iops : min= 770, max= 2048, avg=1316.57, stdev=432.74, samples=21 00:27:27.724 lat (usec) : 250=36.01%, 500=61.11%, 750=2.01%, 1000=0.01% 00:27:27.724 lat (msec) : 2=0.01%, 50=0.85%, >=2000=0.01% 00:27:27.724 cpu : usr=0.40%, sys=0.75%, ctx=27620, majf=0, minf=2 00:27:27.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:27.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.724 issued rwts: total=13795,13824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:27.724 00:27:27.724 Run status group 0 (all jobs): 00:27:27.724 READ: bw=919KiB/s (941kB/s), 919KiB/s-919KiB/s (941kB/s-941kB/s), io=53.9MiB (56.5MB), run=60041-60041msec 00:27:27.724 WRITE: bw=921KiB/s (943kB/s), 921KiB/s-921KiB/s (943kB/s-943kB/s), io=54.0MiB (56.6MB), run=60041-60041msec 00:27:27.724 00:27:27.724 Disk stats (read/write): 00:27:27.724 nvme0n1: ios=13890/13824, merge=0/0, ticks=14504/3233, in_queue=17737, util=99.82% 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:27.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:27.724 nvmf hotplug test: fio successful as expected 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.724 15:45:38 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.724 rmmod nvme_tcp 00:27:27.724 rmmod nvme_fabrics 00:27:27.724 rmmod nvme_keyring 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1381450 ']' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1381450 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1381450 ']' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1381450 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1381450 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1381450' 00:27:27.724 killing process with pid 1381450 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1381450 00:27:27.724 [2024-05-15 15:45:39.101101] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1381450 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.724 15:45:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.292 15:45:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.292 00:27:28.292 real 1m8.587s 00:27:28.292 user 4m10.788s 00:27:28.292 sys 0m7.560s 00:27:28.292 15:45:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:28.292 15:45:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.292 ************************************ 00:27:28.292 END TEST nvmf_initiator_timeout 00:27:28.292 ************************************ 00:27:28.550 15:45:41 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:28.550 15:45:41 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:28.550 15:45:41 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:28.550 15:45:41 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:28.550 15:45:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:31.090 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:31.090 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:31.090 Found net devices under 0000:09:00.0: cvl_0_0 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:31.090 Found net devices under 0000:09:00.1: cvl_0_1 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:31.090 15:45:43 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:31.090 15:45:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:31.090 15:45:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:31.090 15:45:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.090 ************************************ 00:27:31.090 START TEST nvmf_perf_adq 00:27:31.090 ************************************ 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:31.090 * Looking for test storage... 00:27:31.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.090 15:45:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:33.625 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
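Note: the PCI scan being traced here repeats the pattern already seen during the initiator_timeout setup: each supported NIC's PCI address is mapped to its kernel net device name through sysfs. A minimal standalone sketch, using the two addresses reported in this run and omitting the driver and link-state checks the real nvmf/common.sh performs:

    pci_devs=(0000:09:00.0 0000:09:00.1)                   # the two E810 ports found in this run
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdev(s) behind the port
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keeping e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

As seen earlier, cvl_0_0 collected this way becomes the target-side interface (moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1).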
00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:33.625 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:33.625 Found net devices under 0000:09:00.0: cvl_0_0 00:27:33.625 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:33.626 Found net devices under 0000:09:00.1: cvl_0_1 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:33.626 15:45:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:33.885 15:45:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:35.273 15:45:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:40.569 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:40.569 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:40.569 Found net devices under 0000:09:00.0: cvl_0_0 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:40.569 Found net devices under 0000:09:00.1: cvl_0_1 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:27:40.569 00:27:40.569 --- 10.0.0.2 ping statistics --- 00:27:40.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.569 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:40.569 00:27:40.569 --- 10.0.0.1 ping statistics --- 00:27:40.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.569 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1394776 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1394776 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1394776 ']' 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
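Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above splits the two-port NIC between an initiator side (kept in the default namespace) and a target side moved into its own network namespace, so that traffic between 10.0.0.1 and 10.0.0.2 actually crosses the link. Collected in one place, with the interface and namespace names used by this test bed:

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address on port 1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420, as traced
    ping -c 1 10.0.0.2                                # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator sanity check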
00:27:40.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.569 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.569 [2024-05-15 15:45:53.594239] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:40.569 [2024-05-15 15:45:53.594338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.569 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.569 [2024-05-15 15:45:53.638718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:40.840 [2024-05-15 15:45:53.672163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.840 [2024-05-15 15:45:53.759113] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.840 [2024-05-15 15:45:53.759171] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.840 [2024-05-15 15:45:53.759184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.840 [2024-05-15 15:45:53.759195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.840 [2024-05-15 15:45:53.759227] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.840 [2024-05-15 15:45:53.759305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.840 [2024-05-15 15:45:53.759351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.840 [2024-05-15 15:45:53.759408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.840 [2024-05-15 15:45:53.759410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 
--enable-zerocopy-send-server -i posix 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.840 15:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.097 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.097 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.098 [2024-05-15 15:45:54.006932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.098 Malloc1 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.098 [2024-05-15 15:45:54.057883] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:41.098 [2024-05-15 15:45:54.058231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1394814 00:27:41.098 15:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:41.098 15:45:54 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:41.098 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:42.995 "tick_rate": 2700000000, 00:27:42.995 "poll_groups": [ 00:27:42.995 { 00:27:42.995 "name": "nvmf_tgt_poll_group_000", 00:27:42.995 "admin_qpairs": 1, 00:27:42.995 "io_qpairs": 1, 00:27:42.995 "current_admin_qpairs": 1, 00:27:42.995 "current_io_qpairs": 1, 00:27:42.995 "pending_bdev_io": 0, 00:27:42.995 "completed_nvme_io": 20613, 00:27:42.995 "transports": [ 00:27:42.995 { 00:27:42.995 "trtype": "TCP" 00:27:42.995 } 00:27:42.995 ] 00:27:42.995 }, 00:27:42.995 { 00:27:42.995 "name": "nvmf_tgt_poll_group_001", 00:27:42.995 "admin_qpairs": 0, 00:27:42.995 "io_qpairs": 1, 00:27:42.995 "current_admin_qpairs": 0, 00:27:42.995 "current_io_qpairs": 1, 00:27:42.995 "pending_bdev_io": 0, 00:27:42.995 "completed_nvme_io": 19870, 00:27:42.995 "transports": [ 00:27:42.995 { 00:27:42.995 "trtype": "TCP" 00:27:42.995 } 00:27:42.995 ] 00:27:42.995 }, 00:27:42.995 { 00:27:42.995 "name": "nvmf_tgt_poll_group_002", 00:27:42.995 "admin_qpairs": 0, 00:27:42.995 "io_qpairs": 1, 00:27:42.995 "current_admin_qpairs": 0, 00:27:42.995 "current_io_qpairs": 1, 00:27:42.995 "pending_bdev_io": 0, 00:27:42.995 "completed_nvme_io": 17944, 00:27:42.995 "transports": [ 00:27:42.995 { 00:27:42.995 "trtype": "TCP" 00:27:42.995 } 00:27:42.995 ] 00:27:42.995 }, 00:27:42.995 { 00:27:42.995 "name": "nvmf_tgt_poll_group_003", 00:27:42.995 "admin_qpairs": 0, 00:27:42.995 "io_qpairs": 1, 00:27:42.995 "current_admin_qpairs": 0, 00:27:42.995 "current_io_qpairs": 1, 00:27:42.995 "pending_bdev_io": 0, 00:27:42.995 "completed_nvme_io": 20436, 00:27:42.995 "transports": [ 00:27:42.995 { 00:27:42.995 "trtype": "TCP" 00:27:42.995 } 00:27:42.995 ] 00:27:42.995 } 00:27:42.995 ] 00:27:42.995 }' 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:42.995 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:43.253 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:43.253 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:43.253 15:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1394814 00:27:51.357 Initializing NVMe Controllers 00:27:51.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:51.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:51.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:51.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:51.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:51.357 Initialization complete. Launching workers. 
00:27:51.357 ======================================================== 00:27:51.357 Latency(us) 00:27:51.357 Device Information : IOPS MiB/s Average min max 00:27:51.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10649.99 41.60 6008.86 2741.11 9980.00 00:27:51.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10619.29 41.48 6026.69 2041.31 47800.36 00:27:51.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9473.91 37.01 6757.75 2010.88 10362.13 00:27:51.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10844.28 42.36 5903.82 2325.35 8729.43 00:27:51.357 ======================================================== 00:27:51.357 Total : 41587.47 162.45 6156.62 2010.88 47800.36 00:27:51.357 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:51.357 rmmod nvme_tcp 00:27:51.357 rmmod nvme_fabrics 00:27:51.357 rmmod nvme_keyring 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1394776 ']' 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1394776 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1394776 ']' 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1394776 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1394776 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1394776' 00:27:51.357 killing process with pid 1394776 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1394776 00:27:51.357 [2024-05-15 15:46:04.281823] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:51.357 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1394776 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.617 15:46:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.617 15:46:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.518 15:46:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.518 15:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:53.518 15:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:54.083 15:46:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:55.982 15:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.250 
15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:01.250 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:01.250 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:01.250 Found net devices under 0000:09:00.0: cvl_0_0 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:01.250 Found net devices under 0000:09:00.1: cvl_0_1 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:01.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:28:01.250 00:28:01.250 --- 10.0.0.2 ping statistics --- 00:28:01.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.250 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:28:01.250 00:28:01.250 --- 10.0.0.1 ping statistics --- 00:28:01.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.250 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:01.250 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:01.251 net.core.busy_poll = 1 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:01.251 net.core.busy_read = 1 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1397337 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1397337 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1397337 ']' 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.251 15:46:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 [2024-05-15 15:46:13.992383] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:01.251 [2024-05-15 15:46:13.992461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.251 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.251 [2024-05-15 15:46:14.040016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:01.251 [2024-05-15 15:46:14.070972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.251 [2024-05-15 15:46:14.152752] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.251 [2024-05-15 15:46:14.152804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
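The ADQ plumbing applied to the target port ahead of this second nvmf_tgt start is easier to read without the trace noise. All values are the ones visible above (two traffic classes, NVMe/TCP listener at 10.0.0.2:4420); in the run itself each command executes inside the cvl_0_0_ns_spdk namespace via ip netns exec, and scripts/perf/nvmf/set_xps_rxqs is run afterwards to pin XPS to the receive queues:

    dev=cvl_0_0
    ethtool --offload "$dev" hw-tc-offload on                  # enable TC offload on the ice port
    ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                             # busy-poll sockets rather than sleeping
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 2@0 (default traffic), TC1 = queues 2@2 (ADQ queues)
    tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$dev" ingress
    # Steer NVMe/TCP traffic for the listener into TC1 in hardware
    tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1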
00:28:01.251 [2024-05-15 15:46:14.152827] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.251 [2024-05-15 15:46:14.152837] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.251 [2024-05-15 15:46:14.152847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.251 [2024-05-15 15:46:14.152928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.251 [2024-05-15 15:46:14.152951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.251 [2024-05-15 15:46:14.153006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.251 [2024-05-15 15:46:14.153008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.251 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.508 [2024-05-15 15:46:14.383021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.508 Malloc1 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.508 [2024-05-15 15:46:14.436076] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:01.508 [2024-05-15 15:46:14.436411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1397446 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:01.508 15:46:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:01.508 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:03.405 "tick_rate": 2700000000, 00:28:03.405 "poll_groups": [ 00:28:03.405 { 00:28:03.405 "name": "nvmf_tgt_poll_group_000", 00:28:03.405 "admin_qpairs": 1, 00:28:03.405 "io_qpairs": 1, 00:28:03.405 "current_admin_qpairs": 1, 00:28:03.405 "current_io_qpairs": 1, 00:28:03.405 "pending_bdev_io": 0, 00:28:03.405 "completed_nvme_io": 26311, 00:28:03.405 "transports": [ 00:28:03.405 { 00:28:03.405 "trtype": "TCP" 00:28:03.405 } 00:28:03.405 ] 00:28:03.405 }, 00:28:03.405 { 00:28:03.405 "name": 
"nvmf_tgt_poll_group_001", 00:28:03.405 "admin_qpairs": 0, 00:28:03.405 "io_qpairs": 3, 00:28:03.405 "current_admin_qpairs": 0, 00:28:03.405 "current_io_qpairs": 3, 00:28:03.405 "pending_bdev_io": 0, 00:28:03.405 "completed_nvme_io": 21842, 00:28:03.405 "transports": [ 00:28:03.405 { 00:28:03.405 "trtype": "TCP" 00:28:03.405 } 00:28:03.405 ] 00:28:03.405 }, 00:28:03.405 { 00:28:03.405 "name": "nvmf_tgt_poll_group_002", 00:28:03.405 "admin_qpairs": 0, 00:28:03.405 "io_qpairs": 0, 00:28:03.405 "current_admin_qpairs": 0, 00:28:03.405 "current_io_qpairs": 0, 00:28:03.405 "pending_bdev_io": 0, 00:28:03.405 "completed_nvme_io": 0, 00:28:03.405 "transports": [ 00:28:03.405 { 00:28:03.405 "trtype": "TCP" 00:28:03.405 } 00:28:03.405 ] 00:28:03.405 }, 00:28:03.405 { 00:28:03.405 "name": "nvmf_tgt_poll_group_003", 00:28:03.405 "admin_qpairs": 0, 00:28:03.405 "io_qpairs": 0, 00:28:03.405 "current_admin_qpairs": 0, 00:28:03.405 "current_io_qpairs": 0, 00:28:03.405 "pending_bdev_io": 0, 00:28:03.405 "completed_nvme_io": 0, 00:28:03.405 "transports": [ 00:28:03.405 { 00:28:03.405 "trtype": "TCP" 00:28:03.405 } 00:28:03.405 ] 00:28:03.405 } 00:28:03.405 ] 00:28:03.405 }' 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:03.405 15:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1397446 00:28:11.559 Initializing NVMe Controllers 00:28:11.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:11.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:11.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:11.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:11.559 Initialization complete. Launching workers. 
00:28:11.559 ======================================================== 00:28:11.559 Latency(us) 00:28:11.559 Device Information : IOPS MiB/s Average min max 00:28:11.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3856.00 15.06 16602.70 1996.43 66028.63 00:28:11.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3784.90 14.78 16912.84 2672.22 67896.72 00:28:11.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14146.60 55.26 4524.06 1456.89 46175.07 00:28:11.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3802.10 14.85 16836.16 2783.82 68655.63 00:28:11.559 ======================================================== 00:28:11.559 Total : 25589.59 99.96 10005.87 1456.89 68655.63 00:28:11.559 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.559 rmmod nvme_tcp 00:28:11.559 rmmod nvme_fabrics 00:28:11.559 rmmod nvme_keyring 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1397337 ']' 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1397337 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1397337 ']' 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1397337 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:11.559 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1397337 00:28:11.817 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:11.817 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:11.817 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1397337' 00:28:11.817 killing process with pid 1397337 00:28:11.817 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1397337 00:28:11.817 [2024-05-15 15:46:24.683897] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:11.817 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1397337 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.076 
15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.076 15:46:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.360 15:46:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.360 15:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:15.360 00:28:15.360 real 0m44.101s 00:28:15.360 user 2m34.344s 00:28:15.360 sys 0m11.793s 00:28:15.360 15:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:15.360 15:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.360 ************************************ 00:28:15.360 END TEST nvmf_perf_adq 00:28:15.360 ************************************ 00:28:15.360 15:46:27 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:15.360 15:46:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:15.360 15:46:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:15.360 15:46:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.360 ************************************ 00:28:15.360 START TEST nvmf_shutdown 00:28:15.360 ************************************ 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:15.360 * Looking for test storage... 
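Before the shutdown suite gets going, note that the nvmftestfini teardown which closed the perf_adq run above reduces to a handful of steps; a rough sketch of what those log lines perform, assuming the interface names (cvl_0_0/cvl_0_1), the cvl_0_0_ns_spdk namespace and the target pid recorded in this run:

    # Unload the kernel NVMe/TCP initiator stack (the rmmod lines above).
    for mod in nvme_tcp nvme_fabrics nvme_keyring; do
        modprobe -v -r "$mod" || true
    done
    # Stop the SPDK target started for the test (pid 1397337 here).
    kill "$nvmfpid" && wait "$nvmfpid"
    # Tear down the point-to-point test network.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1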
00:28:15.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:15.360 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.361 ************************************ 00:28:15.361 START TEST nvmf_shutdown_tc1 00:28:15.361 ************************************ 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:28:15.361 15:46:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.361 15:46:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:17.889 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:17.889 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.889 15:46:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:17.889 Found net devices under 0000:09:00.0: cvl_0_0 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.889 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:17.890 Found net devices under 0000:09:00.1: cvl_0_1 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:28:17.890 00:28:17.890 --- 10.0.0.2 ping statistics --- 00:28:17.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.890 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:28:17.890 00:28:17.890 --- 10.0.0.1 ping statistics --- 00:28:17.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.890 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1401030 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1401030 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1401030 ']' 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:17.890 15:46:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.890 [2024-05-15 15:46:30.727707] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
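The nvmf_tgt whose startup banner appears here was launched by nvmfappstart inside the network namespace that nvmf_tcp_init put together a few lines earlier. Condensed, and keeping the device names, addresses and paths of this run, the setup looks roughly like the following (a sketch, not the framework code itself):

    # Target-side interface moves into its own namespace; the initiator side stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Start the target in that namespace and wait until its RPC socket answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done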
00:28:17.890 [2024-05-15 15:46:30.727779] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.890 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.890 [2024-05-15 15:46:30.774051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:17.890 [2024-05-15 15:46:30.804015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.890 [2024-05-15 15:46:30.886575] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.890 [2024-05-15 15:46:30.886624] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.890 [2024-05-15 15:46:30.886645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.890 [2024-05-15 15:46:30.886656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.890 [2024-05-15 15:46:30.886666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.890 [2024-05-15 15:46:30.886728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.890 [2024-05-15 15:46:30.886785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.890 [2024-05-15 15:46:30.886860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:17.890 [2024-05-15 15:46:30.886863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.148 [2024-05-15 15:46:31.029795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.148 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.148 Malloc1 00:28:18.148 [2024-05-15 15:46:31.105346] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:18.148 [2024-05-15 15:46:31.105684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.148 Malloc2 00:28:18.148 Malloc3 00:28:18.148 Malloc4 00:28:18.406 Malloc5 00:28:18.406 Malloc6 00:28:18.406 Malloc7 00:28:18.406 Malloc8 00:28:18.406 Malloc9 00:28:18.664 Malloc10 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1401205 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1401205 /var/tmp/bdevperf.sock 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1401205 ']' 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.664 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 
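The heredoc block that just closed is one iteration of gen_nvmf_target_json: for every subsystem number it is given, it appends a bdev_nvme_attach_controller parameter stanza, and the stanzas are later comma-joined and pretty-printed with jq so that bdev_svc (and, further below, bdevperf) can consume the result over --json. A trimmed sketch of that generation step; the wrapper the real helper puts around the array is not shown in this excerpt, so only the visible part is reproduced:

    # One attach-controller stanza per subsystem, targeting the listener created above.
    config=()
    for n in {1..10}; do
        config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$n" "$n" "$n")")
    done
    # Comma-join the stanzas and pretty-print them, as the IFS=,/printf/jq lines below do.
    ( IFS=,; printf '[ %s ]\n' "${config[*]}" ) | jq .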
00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.665 { 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme$subsystem", 00:28:18.665 "trtype": "$TEST_TRANSPORT", 00:28:18.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "$NVMF_PORT", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.665 "hdgst": ${hdgst:-false}, 00:28:18.665 "ddgst": ${ddgst:-false} 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 } 00:28:18.665 EOF 00:28:18.665 )") 00:28:18.665 15:46:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:18.665 15:46:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:18.665 "params": { 00:28:18.665 "name": "Nvme1", 00:28:18.665 "trtype": "tcp", 00:28:18.665 "traddr": "10.0.0.2", 00:28:18.665 "adrfam": "ipv4", 00:28:18.665 "trsvcid": "4420", 00:28:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.665 "hdgst": false, 00:28:18.665 "ddgst": false 00:28:18.665 }, 00:28:18.665 "method": "bdev_nvme_attach_controller" 00:28:18.665 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme2", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme3", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme4", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme5", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme6", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme7", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme8", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 
00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme9", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 },{ 00:28:18.666 "params": { 00:28:18.666 "name": "Nvme10", 00:28:18.666 "trtype": "tcp", 00:28:18.666 "traddr": "10.0.0.2", 00:28:18.666 "adrfam": "ipv4", 00:28:18.666 "trsvcid": "4420", 00:28:18.666 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.666 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.666 "hdgst": false, 00:28:18.666 "ddgst": false 00:28:18.666 }, 00:28:18.666 "method": "bdev_nvme_attach_controller" 00:28:18.666 }' 00:28:18.666 [2024-05-15 15:46:31.610972] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:18.666 [2024-05-15 15:46:31.611053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:18.666 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.666 [2024-05-15 15:46:31.650830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:18.666 [2024-05-15 15:46:31.684447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.923 [2024-05-15 15:46:31.767996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1401205 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:20.820 15:46:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:21.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1401205 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1401030 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json 
/dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.752 "name": "Nvme$subsystem", 00:28:21.752 "trtype": "$TEST_TRANSPORT", 00:28:21.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.752 "adrfam": "ipv4", 00:28:21.752 "trsvcid": "$NVMF_PORT", 00:28:21.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.752 "hdgst": ${hdgst:-false}, 00:28:21.752 "ddgst": ${ddgst:-false} 00:28:21.752 }, 00:28:21.752 "method": "bdev_nvme_attach_controller" 00:28:21.752 } 00:28:21.752 EOF 00:28:21.752 )") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.752 "name": "Nvme$subsystem", 00:28:21.752 "trtype": "$TEST_TRANSPORT", 00:28:21.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.752 "adrfam": "ipv4", 00:28:21.752 "trsvcid": "$NVMF_PORT", 00:28:21.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.752 "hdgst": ${hdgst:-false}, 00:28:21.752 "ddgst": ${ddgst:-false} 00:28:21.752 }, 00:28:21.752 "method": "bdev_nvme_attach_controller" 00:28:21.752 } 00:28:21.752 EOF 00:28:21.752 )") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.752 "name": "Nvme$subsystem", 00:28:21.752 "trtype": "$TEST_TRANSPORT", 00:28:21.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.752 "adrfam": "ipv4", 00:28:21.752 "trsvcid": "$NVMF_PORT", 00:28:21.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.752 "hdgst": ${hdgst:-false}, 00:28:21.752 "ddgst": ${ddgst:-false} 00:28:21.752 }, 00:28:21.752 "method": "bdev_nvme_attach_controller" 00:28:21.752 } 00:28:21.752 EOF 00:28:21.752 )") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.752 "name": "Nvme$subsystem", 00:28:21.752 "trtype": "$TEST_TRANSPORT", 00:28:21.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.752 "adrfam": "ipv4", 00:28:21.752 "trsvcid": "$NVMF_PORT", 00:28:21.752 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.752 "hdgst": ${hdgst:-false}, 00:28:21.752 "ddgst": ${ddgst:-false} 00:28:21.752 }, 00:28:21.752 "method": "bdev_nvme_attach_controller" 00:28:21.752 } 00:28:21.752 EOF 00:28:21.752 )") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.752 "name": "Nvme$subsystem", 00:28:21.752 "trtype": "$TEST_TRANSPORT", 00:28:21.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.752 "adrfam": "ipv4", 00:28:21.752 "trsvcid": "$NVMF_PORT", 00:28:21.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.752 "hdgst": ${hdgst:-false}, 00:28:21.752 "ddgst": ${ddgst:-false} 00:28:21.752 }, 00:28:21.752 "method": "bdev_nvme_attach_controller" 00:28:21.752 } 00:28:21.752 EOF 00:28:21.752 )") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.752 "name": "Nvme$subsystem", 00:28:21.752 "trtype": "$TEST_TRANSPORT", 00:28:21.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.752 "adrfam": "ipv4", 00:28:21.752 "trsvcid": "$NVMF_PORT", 00:28:21.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.752 "hdgst": ${hdgst:-false}, 00:28:21.752 "ddgst": ${ddgst:-false} 00:28:21.752 }, 00:28:21.752 "method": "bdev_nvme_attach_controller" 00:28:21.752 } 00:28:21.752 EOF 00:28:21.752 )") 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.752 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.752 { 00:28:21.752 "params": { 00:28:21.753 "name": "Nvme$subsystem", 00:28:21.753 "trtype": "$TEST_TRANSPORT", 00:28:21.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "$NVMF_PORT", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.753 "hdgst": ${hdgst:-false}, 00:28:21.753 "ddgst": ${ddgst:-false} 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 } 00:28:21.753 EOF 00:28:21.753 )") 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.753 { 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme$subsystem", 00:28:21.753 "trtype": "$TEST_TRANSPORT", 00:28:21.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "$NVMF_PORT", 00:28:21.753 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.753 "hdgst": ${hdgst:-false}, 00:28:21.753 "ddgst": ${ddgst:-false} 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 } 00:28:21.753 EOF 00:28:21.753 )") 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.753 { 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme$subsystem", 00:28:21.753 "trtype": "$TEST_TRANSPORT", 00:28:21.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "$NVMF_PORT", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.753 "hdgst": ${hdgst:-false}, 00:28:21.753 "ddgst": ${ddgst:-false} 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 } 00:28:21.753 EOF 00:28:21.753 )") 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.753 { 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme$subsystem", 00:28:21.753 "trtype": "$TEST_TRANSPORT", 00:28:21.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "$NVMF_PORT", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.753 "hdgst": ${hdgst:-false}, 00:28:21.753 "ddgst": ${ddgst:-false} 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 } 00:28:21.753 EOF 00:28:21.753 )") 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:21.753 15:46:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme1", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme2", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme3", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme4", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme5", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme6", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme7", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme8", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:21.753 "hdgst": false, 
00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme9", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 },{ 00:28:21.753 "params": { 00:28:21.753 "name": "Nvme10", 00:28:21.753 "trtype": "tcp", 00:28:21.753 "traddr": "10.0.0.2", 00:28:21.753 "adrfam": "ipv4", 00:28:21.753 "trsvcid": "4420", 00:28:21.753 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:21.753 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:21.753 "hdgst": false, 00:28:21.753 "ddgst": false 00:28:21.753 }, 00:28:21.753 "method": "bdev_nvme_attach_controller" 00:28:21.753 }' 00:28:21.753 [2024-05-15 15:46:34.616665] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:21.753 [2024-05-15 15:46:34.616754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401626 ] 00:28:21.753 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.753 [2024-05-15 15:46:34.657120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:21.753 [2024-05-15 15:46:34.690919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.753 [2024-05-15 15:46:34.774397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.649 Running I/O for 1 seconds... 
00:28:24.581 00:28:24.581 Latency(us) 00:28:24.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.581 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme1n1 : 1.14 224.14 14.01 0.00 0.00 282743.28 24758.04 284280.60 00:28:24.581 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme2n1 : 1.15 222.41 13.90 0.00 0.00 280322.28 21651.15 271853.04 00:28:24.581 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme3n1 : 1.14 225.08 14.07 0.00 0.00 272408.46 17670.45 268746.15 00:28:24.581 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme4n1 : 1.11 232.68 14.54 0.00 0.00 256597.38 9223.59 264085.81 00:28:24.581 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme5n1 : 1.17 217.93 13.62 0.00 0.00 271309.18 20291.89 265639.25 00:28:24.581 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme6n1 : 1.12 240.39 15.02 0.00 0.00 238142.90 8980.86 234570.33 00:28:24.581 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme7n1 : 1.13 241.92 15.12 0.00 0.00 234167.18 10728.49 278066.82 00:28:24.581 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme8n1 : 1.15 223.25 13.95 0.00 0.00 252603.35 16699.54 265639.25 00:28:24.581 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.581 Nvme9n1 : 1.18 216.53 13.53 0.00 0.00 257047.70 22524.97 301368.51 00:28:24.581 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.581 Verification LBA range: start 0x0 length 0x400 00:28:24.582 Nvme10n1 : 1.19 268.48 16.78 0.00 0.00 203864.75 6602.15 271853.04 00:28:24.582 =================================================================================================================== 00:28:24.582 Total : 2312.81 144.55 0.00 0.00 253464.84 6602.15 301368.51 00:28:24.582 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:24.582 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:24.582 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:24.582 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.839 rmmod nvme_tcp 00:28:24.839 rmmod nvme_fabrics 00:28:24.839 rmmod nvme_keyring 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1401030 ']' 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1401030 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1401030 ']' 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1401030 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1401030 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1401030' 00:28:24.839 killing process with pid 1401030 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1401030 00:28:24.839 [2024-05-15 15:46:37.770075] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:24.839 15:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1401030 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.405 15:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
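The tc1 teardown just logged (stoptarget followed by nvmftestfini) removes the bdevperf state file and the generated bdevperf.conf/rpcs.txt, unloads the NVMe-oF initiator modules, kills the nvmf_tgt process (pid 1401030 in this run), and finally tears down the test namespace and flushes the initiator address. Reduced to plain commands it is roughly the following; the namespace-deletion step is an assumption, since _remove_spdk_ns is not expanded in this log:

rm -f ./local-job0-0-verify.state
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
sync
modprobe -v -r nvme-tcp      # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 1401030                 # killprocess: the nvmf_tgt started for this test case
ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns (not shown in this log)
ip -4 addr flush cvl_0_1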
00:28:27.305 00:28:27.305 real 0m12.187s 00:28:27.305 user 0m34.133s 00:28:27.305 sys 0m3.541s 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.305 ************************************ 00:28:27.305 END TEST nvmf_shutdown_tc1 00:28:27.305 ************************************ 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:27.305 ************************************ 00:28:27.305 START TEST nvmf_shutdown_tc2 00:28:27.305 ************************************ 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:27.305 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.306 15:46:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:27.306 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:27.306 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:27.306 Found net devices under 0000:09:00.0: cvl_0_0 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: 
cvl_0_1' 00:28:27.306 Found net devices under 0000:09:00.1: cvl_0_1 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.306 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:27.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:27.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:28:27.565 00:28:27.565 --- 10.0.0.2 ping statistics --- 00:28:27.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.565 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:27.565 00:28:27.565 --- 10.0.0.1 ping statistics --- 00:28:27.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.565 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1402391 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1402391 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1402391 ']' 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
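For tc2, nvmftestinit rebuilds the same topology: the target-side interface (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, the initiator keeps cvl_0_1 (10.0.0.1), and one ping in each direction confirms connectivity before the target application is started inside that namespace. Stripped of the test wrappers, the nvmfappstart step that follows is essentially:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

-m 0x1E pins the reactors to cores 1-4, which matches the "Reactor started on core 1/2/3/4" notices further down, and -e 0xFFFF enables all tracepoint groups, which is why the app_setup_trace notices about 'spdk_trace -s nvmf -i 0' appear.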
00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:27.565 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.565 [2024-05-15 15:46:40.586260] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:27.565 [2024-05-15 15:46:40.586347] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.565 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.565 [2024-05-15 15:46:40.637452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:27.823 [2024-05-15 15:46:40.673652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.823 [2024-05-15 15:46:40.761700] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.823 [2024-05-15 15:46:40.761757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.823 [2024-05-15 15:46:40.761785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.823 [2024-05-15 15:46:40.761800] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.823 [2024-05-15 15:46:40.761813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.823 [2024-05-15 15:46:40.761896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.823 [2024-05-15 15:46:40.762012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.823 [2024-05-15 15:46:40.762080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.823 [2024-05-15 15:46:40.762077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.823 [2024-05-15 15:46:40.898738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.823 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.081 15:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.081 Malloc1 00:28:28.081 [2024-05-15 15:46:40.973462] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:28.081 [2024-05-15 15:46:40.973767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.081 Malloc2 00:28:28.081 Malloc3 00:28:28.081 Malloc4 00:28:28.081 Malloc5 00:28:28.383 Malloc6 00:28:28.383 Malloc7 00:28:28.383 Malloc8 00:28:28.383 Malloc9 
00:28:28.383 Malloc10 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1402570 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1402570 /var/tmp/bdevperf.sock 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1402570 ']' 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:28.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
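Before bdevperf is launched, shutdown.sh@20 creates the TCP transport with the options logged above (rpc_cmd nvmf_create_transport -t tcp -o -u 8192), and the loop at shutdown.sh@27-28 batches one block of RPCs per subsystem into rpcs.txt, which the single rpc_cmd at @35 then plays back; that batch is what produces the Malloc1-Malloc10 bdevs above and the TCP listener on 10.0.0.2:4420. The per-subsystem block corresponds roughly to the standard SPDK RPCs below, run through scripts/rpc.py (which rpc_cmd wraps); the malloc bdev size and serial-number arguments are assumptions, as the log does not show them:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB bdev, 512-byte blocks (assumed sizes)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420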
00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.383 { 00:28:28.383 "params": { 00:28:28.383 "name": "Nvme$subsystem", 00:28:28.383 "trtype": "$TEST_TRANSPORT", 00:28:28.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.383 "adrfam": "ipv4", 00:28:28.383 "trsvcid": "$NVMF_PORT", 00:28:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.383 "hdgst": ${hdgst:-false}, 00:28:28.383 "ddgst": ${ddgst:-false} 00:28:28.383 }, 00:28:28.383 "method": "bdev_nvme_attach_controller" 00:28:28.383 } 00:28:28.383 EOF 00:28:28.383 )") 00:28:28.383 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 
00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:28.670 { 00:28:28.670 "params": { 00:28:28.670 "name": "Nvme$subsystem", 00:28:28.670 "trtype": "$TEST_TRANSPORT", 00:28:28.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.670 "adrfam": "ipv4", 00:28:28.670 "trsvcid": "$NVMF_PORT", 00:28:28.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.670 "hdgst": ${hdgst:-false}, 00:28:28.670 "ddgst": ${ddgst:-false} 00:28:28.670 }, 00:28:28.670 "method": "bdev_nvme_attach_controller" 00:28:28.670 } 00:28:28.670 EOF 00:28:28.670 )") 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:28.670 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:28.671 15:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme1", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme2", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme3", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme4", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme5", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme6", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme7", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme8", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:28.671 "hdgst": false, 
00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme9", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 },{ 00:28:28.671 "params": { 00:28:28.671 "name": "Nvme10", 00:28:28.671 "trtype": "tcp", 00:28:28.671 "traddr": "10.0.0.2", 00:28:28.671 "adrfam": "ipv4", 00:28:28.671 "trsvcid": "4420", 00:28:28.671 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:28.671 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:28.671 "hdgst": false, 00:28:28.671 "ddgst": false 00:28:28.671 }, 00:28:28.671 "method": "bdev_nvme_attach_controller" 00:28:28.671 }' 00:28:28.671 [2024-05-15 15:46:41.492266] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:28.671 [2024-05-15 15:46:41.492342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402570 ] 00:28:28.671 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.671 [2024-05-15 15:46:41.531313] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:28.671 [2024-05-15 15:46:41.565046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.671 [2024-05-15 15:46:41.648297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.044 Running I/O for 10 seconds... 
00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:30.609 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@64 -- # ret=0 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1402570 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1402570 ']' 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1402570 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1402570 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1402570' 00:28:30.867 killing process with pid 1402570 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1402570 00:28:30.867 15:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1402570 00:28:30.867 Received shutdown signal, test time was about 0.985250 seconds 00:28:30.867 00:28:30.867 Latency(us) 00:28:30.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.867 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme1n1 : 0.94 205.17 12.82 0.00 0.00 308306.93 21456.97 274959.93 00:28:30.867 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme2n1 : 0.93 217.98 13.62 0.00 0.00 280886.15 9126.49 264085.81 00:28:30.867 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme3n1 : 0.98 261.21 16.33 0.00 0.00 232989.96 20097.71 271853.04 00:28:30.867 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme4n1 : 0.98 262.15 16.38 0.00 0.00 227533.75 17573.36 250104.79 00:28:30.867 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme5n1 : 0.96 199.30 12.46 0.00 0.00 293171.96 21942.42 276513.37 00:28:30.867 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme6n1 : 0.96 200.99 12.56 0.00 0.00 284523.08 21554.06 278066.82 00:28:30.867 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme7n1 : 0.98 260.05 16.25 0.00 0.00 216206.41 21554.06 273406.48 00:28:30.867 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 
00:28:30.867 Nvme8n1 : 0.94 207.75 12.98 0.00 0.00 261964.80 4271.98 274959.93 00:28:30.867 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme9n1 : 0.97 197.14 12.32 0.00 0.00 272680.58 24369.68 324670.20 00:28:30.867 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.867 Verification LBA range: start 0x0 length 0x400 00:28:30.867 Nvme10n1 : 0.97 198.15 12.38 0.00 0.00 265564.41 22622.06 274959.93 00:28:30.867 =================================================================================================================== 00:28:30.867 Total : 2209.87 138.12 0.00 0.00 260960.66 4271.98 324670.20 00:28:31.124 15:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1402391 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.055 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.313 rmmod nvme_tcp 00:28:32.313 rmmod nvme_fabrics 00:28:32.313 rmmod nvme_keyring 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1402391 ']' 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1402391 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1402391 ']' 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1402391 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1402391 00:28:32.313 15:46:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1402391' 00:28:32.313 killing process with pid 1402391 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1402391 00:28:32.313 [2024-05-15 15:46:45.258452] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:32.313 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1402391 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.878 15:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.778 00:28:34.778 real 0m7.454s 00:28:34.778 user 0m21.861s 00:28:34.778 sys 0m1.534s 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 ************************************ 00:28:34.778 END TEST nvmf_shutdown_tc2 00:28:34.778 ************************************ 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 ************************************ 00:28:34.778 START TEST nvmf_shutdown_tc3 00:28:34.778 ************************************ 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.778 15:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.778 15:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:34.778 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.778 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:34.779 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:34.779 Found net devices under 0000:09:00.0: cvl_0_0 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.779 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:35.037 Found net devices under 0000:09:00.1: cvl_0_1 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 
-- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.037 15:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:35.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:28:35.037 00:28:35.037 --- 10.0.0.2 ping statistics --- 00:28:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.037 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:35.037 00:28:35.037 --- 10.0.0.1 ping statistics --- 00:28:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.037 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1403364 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1403364 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1403364 ']' 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:35.037 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.037 [2024-05-15 15:46:48.089182] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:28:35.037 [2024-05-15 15:46:48.089294] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.037 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.037 [2024-05-15 15:46:48.137591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:35.295 [2024-05-15 15:46:48.171297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.295 [2024-05-15 15:46:48.256059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.295 [2024-05-15 15:46:48.256108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.295 [2024-05-15 15:46:48.256128] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.295 [2024-05-15 15:46:48.256139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.295 [2024-05-15 15:46:48.256149] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.295 [2024-05-15 15:46:48.256267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.295 [2024-05-15 15:46:48.256361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.295 [2024-05-15 15:46:48.256427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.295 [2024-05-15 15:46:48.256430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.295 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:35.295 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:35.295 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.295 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.295 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.553 [2024-05-15 15:46:48.419044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.553 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.553 Malloc1 00:28:35.553 [2024-05-15 15:46:48.508659] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:35.553 [2024-05-15 15:46:48.509001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.553 Malloc2 00:28:35.553 Malloc3 00:28:35.553 Malloc4 00:28:35.810 Malloc5 00:28:35.810 Malloc6 00:28:35.810 Malloc7 00:28:35.810 Malloc8 00:28:35.810 Malloc9 00:28:36.069 Malloc10 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1403538 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1403538 /var/tmp/bdevperf.sock 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1403538 ']' 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.069 { 00:28:36.069 "params": { 00:28:36.069 "name": "Nvme$subsystem", 00:28:36.069 "trtype": "$TEST_TRANSPORT", 00:28:36.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.069 "adrfam": "ipv4", 00:28:36.069 "trsvcid": "$NVMF_PORT", 00:28:36.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.069 "hdgst": ${hdgst:-false}, 00:28:36.069 "ddgst": ${ddgst:-false} 00:28:36.069 }, 00:28:36.069 "method": "bdev_nvme_attach_controller" 00:28:36.069 } 00:28:36.069 EOF 00:28:36.069 )") 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.069 { 00:28:36.069 "params": { 00:28:36.069 "name": "Nvme$subsystem", 00:28:36.069 "trtype": "$TEST_TRANSPORT", 00:28:36.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.069 "adrfam": "ipv4", 00:28:36.069 "trsvcid": "$NVMF_PORT", 00:28:36.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.069 "hdgst": ${hdgst:-false}, 00:28:36.069 "ddgst": ${ddgst:-false} 00:28:36.069 }, 00:28:36.069 "method": "bdev_nvme_attach_controller" 00:28:36.069 } 00:28:36.069 
EOF 00:28:36.069 )") 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.069 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.069 { 00:28:36.069 "params": { 00:28:36.069 "name": "Nvme$subsystem", 00:28:36.069 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 
00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.070 { 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme$subsystem", 00:28:36.070 "trtype": "$TEST_TRANSPORT", 00:28:36.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "$NVMF_PORT", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.070 "hdgst": ${hdgst:-false}, 00:28:36.070 "ddgst": ${ddgst:-false} 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 } 00:28:36.070 EOF 00:28:36.070 )") 00:28:36.070 15:46:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:36.070 15:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme1", 00:28:36.070 "trtype": "tcp", 00:28:36.070 "traddr": "10.0.0.2", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "4420", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.070 "hdgst": false, 00:28:36.070 "ddgst": false 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 },{ 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme2", 00:28:36.070 "trtype": "tcp", 00:28:36.070 "traddr": "10.0.0.2", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "4420", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:36.070 "hdgst": false, 00:28:36.070 "ddgst": false 00:28:36.070 }, 00:28:36.070 "method": "bdev_nvme_attach_controller" 00:28:36.070 },{ 00:28:36.070 "params": { 00:28:36.070 "name": "Nvme3", 00:28:36.070 "trtype": "tcp", 00:28:36.070 "traddr": "10.0.0.2", 00:28:36.070 "adrfam": "ipv4", 00:28:36.070 "trsvcid": "4420", 00:28:36.070 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:36.070 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme4", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme5", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme6", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme7", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme8", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 
00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme9", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 },{ 00:28:36.071 "params": { 00:28:36.071 "name": "Nvme10", 00:28:36.071 "trtype": "tcp", 00:28:36.071 "traddr": "10.0.0.2", 00:28:36.071 "adrfam": "ipv4", 00:28:36.071 "trsvcid": "4420", 00:28:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:36.071 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:36.071 "hdgst": false, 00:28:36.071 "ddgst": false 00:28:36.071 }, 00:28:36.071 "method": "bdev_nvme_attach_controller" 00:28:36.071 }' 00:28:36.071 [2024-05-15 15:46:49.027875] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:36.071 [2024-05-15 15:46:49.027962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403538 ] 00:28:36.071 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.071 [2024-05-15 15:46:49.068473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:36.071 [2024-05-15 15:46:49.103036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.329 [2024-05-15 15:46:49.187934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.702 Running I/O for 10 seconds... 
00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.960 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.220 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.220 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=17 00:28:38.220 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 17 -ge 100 ']' 00:28:38.220 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=67 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:38.478 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1403364 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1403364 ']' 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1403364 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1403364 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1403364' 00:28:38.753 killing process with pid 1403364 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1403364 00:28:38.753 [2024-05-15 15:46:51.706874] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:38.753 15:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1403364 00:28:38.753 [2024-05-15 15:46:51.707625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 
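The polling that precedes this shutdown (waitforio, shutdown.sh@59-67) and the killprocess that follows amount to: read num_read_ops from bdev_get_iostat over the bdevperf RPC socket until it reaches 100 or ten attempts are exhausted, then kill and wait for the process under test (the nvmf target, pid 1403364, in this tc3 stage). A standalone sketch is below; the rpc.py path and the pid variable are placeholders, since the harness uses its own rpc_cmd and killprocess helpers.

# Sketch of the waitforio pattern from the trace; the threshold (100 read ops), retry count
# (10) and 0.25 s sleep match shutdown.sh as traced above.
rpc=/path/to/spdk/scripts/rpc.py      # placeholder: location of the SPDK RPC client
sock=/var/tmp/bdevperf.sock

wait_for_io() {
    local i count
    for ((i = 10; i != 0; i--)); do
        count=$("$rpc" -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$count" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}

# $pid: whichever process the stage stops (here the nvmf target, 1403364); it must be a
# child of this shell for wait to apply.
wait_for_io && kill "$pid" && wait "$pid"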
00:28:38.753 [2024-05-15 15:46:51.707680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.707994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.753 [2024-05-15 15:46:51.708079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708284] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.708371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490b10 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the 
state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.710999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 
15:46:51.711429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.711500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490fb0 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.713285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.713318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.713333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.754 [2024-05-15 15:46:51.713346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same 
with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713803] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.713999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the 
state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.714110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491450 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.715997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.755 [2024-05-15 15:46:51.716044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.755 [2024-05-15 15:46:51.716062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.755 [2024-05-15 15:46:51.716077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.755 [2024-05-15 15:46:51.716091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.755 [2024-05-15 15:46:51.716104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.755 [2024-05-15 15:46:51.716119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.755 [2024-05-15 15:46:51.716132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.755 [2024-05-15 15:46:51.716146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16887d0 is same with the state(5) to be set 00:28:38.755 [2024-05-15 15:46:51.716200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.755 [2024-05-15 15:46:51.716211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:28:38.755 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.755 [2024-05-15 15:46:51.716249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-05-15 15:46:51.716249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with id:0 cdw10:00000000 cdw11:00000000 00:28:38.755 the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:28:38.756 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same 
with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165d660 is same [2024-05-15 15:46:51.716349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with with the state(5) to be set 00:28:38.756 the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-05-15 15:46:51.716427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:28:38.756 id:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:28:38.756 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(5) to be set 00:28:38.756 id:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:28:38.756 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165b4f0 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 
15:46:51.716644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680c10 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with [2024-05-15 15:46:51.716790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:28:38.756 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716823] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.756 [2024-05-15 15:46:51.716876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.756 [2024-05-15 15:46:51.716889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a5f0 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.756 [2024-05-15 15:46:51.716978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.716991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491d90 is same with the state(5) to be set 00:28:38.757 [2024-05-15 15:46:51.717527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 
15:46:51.717773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.717984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.717999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.718013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.718028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.718041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 15:46:51.718057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757 [2024-05-15 15:46:51.718070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757 [2024-05-15 
15:46:51.718085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.757
[2024-05-15 15:46:51.718098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757
[2024-05-15 15:46:51.718113 - 15:46:51.719062] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:36..63 nsid:1 lba:29184..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.757
[2024-05-15 15:46:51.718278 - 15:46:51.719136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492250 is same with the state(5) to be set (message repeated at multiple timestamps in this interval) 00:28:38.757
[2024-05-15 15:46:51.719080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759
[2024-05-15 15:46:51.719096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759
[2024-05-15 15:46:51.719114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759
[2024-05-15 15:46:51.719128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759
[2024-05-15 15:46:51.719143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492250 is same with the state(5) to be set 00:28:38.759 [2024-05-15 15:46:51.719158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 
15:46:51.719480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.719659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.719705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.759 [2024-05-15 15:46:51.719795] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c0430 was disconnected and freed. reset controller. 
00:28:38.759 [2024-05-15 15:46:51.720173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.759 [2024-05-15 15:46:51.720459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.759 [2024-05-15 15:46:51.720484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.760 [2024-05-15 15:46:51.720501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.760 [2024-05-15 15:46:51.720516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.760 [2024-05-15 
15:46:51.720532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.760
[2024-05-15 15:46:51.720547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.760
[2024-05-15 15:46:51.720563 - 15:46:51.721539] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:11..39 nsid:1 lba:25984..29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.760
[2024-05-15 15:46:51.720651 - 15:46:51.721568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24926f0 is same with the state(5) to be set (message repeated at multiple timestamps in this interval) 00:28:38.760
[2024-05-15 15:46:51.721558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761
[2024-05-15 15:46:51.721579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761
[2024-05-15 15:46:51.721580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24926f0 is same with the state(5) to be set
00:28:38.761 [2024-05-15 15:46:51.721597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 
[2024-05-15 15:46:51.721916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.721972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.721985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.722000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.722013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.722028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.722041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.722056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.722074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.722090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.761 [2024-05-15 15:46:51.722103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.761 [2024-05-15 15:46:51.722119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.762 [2024-05-15 15:46:51.722132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.762 [2024-05-15 15:46:51.722147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.762 [2024-05-15 15:46:51.722160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.762 [2024-05-15 15:46:51.722175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.762 [2024-05-15 15:46:51.722189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.762 [2024-05-15 
15:46:51.722227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.762
[2024-05-15 15:46:51.722243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.762
[2024-05-15 15:46:51.722268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.762
[2024-05-15 15:46:51.722286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.762
[2024-05-15 15:46:51.722301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.762
[2024-05-15 15:46:51.722315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.762
[2024-05-15 15:46:51.722718 - 15:46:51.722877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set (message repeated at multiple timestamps in this interval) 00:28:38.762
[2024-05-15 15:46:51.722881] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170dea0 was disconnected and freed. reset controller. 00:28:38.762
[2024-05-15 15:46:51.722889 - 15:46:51.723183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set (message repeated at multiple timestamps in this interval) 00:28:38.762
[2024-05-15 15:46:51.723198]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the 
state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.762 [2024-05-15 15:46:51.723550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.723562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.723574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.723602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b90 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 
15:46:51.724891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.724996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same 
with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493030 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.725623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:38.763 [2024-05-15 15:46:51.725658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:38.763 [2024-05-15 15:46:51.725681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181a5f0 (9): Bad file descriptor 00:28:38.763 [2024-05-15 15:46:51.725703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1680c10 (9): Bad file descriptor 00:28:38.763 [2024-05-15 15:46:51.726306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1153610 is same with the state(5) to be set 00:28:38.763 [2024-05-15 15:46:51.726483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 
[2024-05-15 15:46:51.726539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.763 [2024-05-15 15:46:51.726583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.763 [2024-05-15 15:46:51.726596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.726610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1737370 is same with the state(5) to be set 00:28:38.764 [2024-05-15 15:46:51.726637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16887d0 (9): Bad file descriptor 00:28:38.764 [2024-05-15 15:46:51.726668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165d660 (9): Bad file descriptor 00:28:38.764 [2024-05-15 15:46:51.726717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.726738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.726753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.726767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.726781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.726795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.726809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.726833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.726847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b50 is same with the state(5) to be set 00:28:38.764 [2024-05-15 15:46:51.726875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165b4f0 (9): Bad file descriptor 00:28:38.764 [2024-05-15 15:46:51.726923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.726945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.726961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.726976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.727025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.727079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173e030 is same with the state(5) to be set 00:28:38.764 [2024-05-15 15:46:51.727288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.727310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.727378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.727452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.764 [2024-05-15 15:46:51.727608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.727685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180af80 is same with the state(5) to be set 00:28:38.764 [2024-05-15 15:46:51.729236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.764 [2024-05-15 15:46:51.729383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.764 [2024-05-15 15:46:51.729409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1680c10 with addr=10.0.0.2, port=4420 00:28:38.764 [2024-05-15 15:46:51.729426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680c10 is same with the state(5) to be set 00:28:38.764 [2024-05-15 15:46:51.729544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.764 [2024-05-15 15:46:51.729680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.764 [2024-05-15 15:46:51.729709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181a5f0 with 
addr=10.0.0.2, port=4420 00:28:38.764 [2024-05-15 15:46:51.729726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a5f0 is same with the state(5) to be set 00:28:38.764 [2024-05-15 15:46:51.729779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.729825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.729858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.729888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.729919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.729950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.729981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.729995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.730011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.730026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.730042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.730056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.730072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.764 [2024-05-15 15:46:51.730087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.730103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.730118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.730134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.730153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 
15:46:51.754449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.764 [2024-05-15 15:46:51.754502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.764 [2024-05-15 15:46:51.754517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754778] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.754977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.754994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.765 [2024-05-15 15:46:51.755724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.765 [2024-05-15 15:46:51.755873] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17158f0 was disconnected and freed. reset controller. 00:28:38.765 [2024-05-15 15:46:51.756030] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.765 [2024-05-15 15:46:51.756141] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.765 [2024-05-15 15:46:51.756307] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.765 [2024-05-15 15:46:51.756401] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.765 [2024-05-15 15:46:51.756712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1680c10 (9): Bad file descriptor 00:28:38.765 [2024-05-15 15:46:51.756747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181a5f0 (9): Bad file descriptor 00:28:38.766 [2024-05-15 15:46:51.756797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1153610 (9): Bad file descriptor 00:28:38.766 [2024-05-15 15:46:51.756840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1737370 (9): Bad file descriptor 00:28:38.766 [2024-05-15 15:46:51.756879] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.766 [2024-05-15 15:46:51.756904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1741b50 (9): Bad file descriptor 00:28:38.766 [2024-05-15 15:46:51.756940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173e030 (9): Bad file descriptor 00:28:38.766 [2024-05-15 15:46:51.756972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180af80 (9): Bad file descriptor 00:28:38.766 [2024-05-15 15:46:51.757004] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.766 [2024-05-15 15:46:51.757024] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.766 [2024-05-15 15:46:51.758349] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.766 [2024-05-15 15:46:51.758424] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.766 [2024-05-15 15:46:51.758547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.766 [2024-05-15 15:46:51.758590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:38.766 [2024-05-15 15:46:51.758609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:38.766 [2024-05-15 15:46:51.758627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:28:38.766 [2024-05-15 15:46:51.758647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:38.766 [2024-05-15 15:46:51.758662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:38.766 [2024-05-15 15:46:51.758675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:38.766 [2024-05-15 15:46:51.758754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.758975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.758991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.759005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.766 [2024-05-15 15:46:51.759021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.766 [2024-05-15 15:46:51.759035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.766 [2024-05-15 15:46:51.759051 .. 15:46:51.760769] nvme_qpair.c: [repeated pattern: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:9..63 nsid:1 lba:25728..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:38.767 [2024-05-15 15:46:51.760785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716bf0 is same with the state(5) to be set
00:28:38.767 [2024-05-15 15:46:51.762041 .. 15:46:51.764048] nvme_qpair.c: [repeated pattern: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:38.769 [2024-05-15 15:46:51.764062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bef10 is same with the state(5) to be set
00:28:38.769 [2024-05-15 15:46:51.765423] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:38.769 [2024-05-15 15:46:51.765464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.769 [2024-05-15 15:46:51.765483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.769 [2024-05-15 15:46:51.765504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:38.769 [2024-05-15 15:46:51.765527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:38.769 [2024-05-15 15:46:51.765732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.769 [2024-05-15 15:46:51.765863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.769 [2024-05-15 15:46:51.765889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d660 with addr=10.0.0.2, port=4420
00:28:38.769 [2024-05-15 15:46:51.765907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165d660 is same with the state(5) to be set
00:28:38.769 [2024-05-15 15:46:51.766412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.769 [2024-05-15 15:46:51.766544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.769 [2024-05-15 15:46:51.766569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165b4f0 with addr=10.0.0.2, port=4420
00:28:38.769 [2024-05-15 15:46:51.766585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165b4f0 is same with the state(5) to be set
00:28:38.769 [2024-05-15 15:46:51.766700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.769 [2024-05-15 15:46:51.766815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.769 [2024-05-15 15:46:51.766839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16887d0 with addr=10.0.0.2, port=4420
00:28:38.769 [2024-05-15 15:46:51.766855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16887d0 is same with the state(5) to be set
00:28:38.769 [2024-05-15 15:46:51.766879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165d660 (9): Bad file descriptor
00:28:38.769 [2024-05-15 15:46:51.767473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165b4f0 (9): Bad file descriptor
00:28:38.769 [2024-05-15 15:46:51.767501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16887d0 (9): Bad file descriptor
00:28:38.769 [2024-05-15 15:46:51.767518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:38.769 [2024-05-15 15:46:51.767532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:38.769 [2024-05-15 15:46:51.767550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:38.769 [2024-05-15 15:46:51.767682] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.769 [2024-05-15 15:46:51.767734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:28:38.769 [2024-05-15 15:46:51.767750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:28:38.769 [2024-05-15 15:46:51.767764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:28:38.769 [2024-05-15 15:46:51.767782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:28:38.769 [2024-05-15 15:46:51.767796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:28:38.769 [2024-05-15 15:46:51.767810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:28:38.770 [2024-05-15 15:46:51.767890 .. 15:46:51.769895] nvme_qpair.c: [repeated pattern: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:38.771 [2024-05-15 15:46:51.769914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c18e0 is same with the state(5) to be set
00:28:38.771 [2024-05-15 15:46:51.771186 .. 15:46:51.771605] nvme_qpair.c: [repeated pattern: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:0..12 nsid:1 lba:24576..26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:38.771 [2024-05-15 15:46:51.771621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.771 [2024-05-15 15:46:51.771635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.771 [2024-05-15 15:46:51.771651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.771 [2024-05-15 15:46:51.771665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.771 [2024-05-15 15:46:51.771681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.771 [2024-05-15 15:46:51.771695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.771 [2024-05-15 15:46:51.771711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.771 [2024-05-15 15:46:51.771726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.771 [2024-05-15 15:46:51.771741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.771 [2024-05-15 15:46:51.771756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.771970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.771987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.772 [2024-05-15 15:46:51.772585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 
15:46:51.772900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.772962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.772978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.773003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.773019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.772 [2024-05-15 15:46:51.773034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.772 [2024-05-15 15:46:51.773051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.773065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.773082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.773096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.773112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.773126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.773142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.773156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.773172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.773193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.773210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.773240] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.773257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1655600 is same with the state(5) to be set 00:28:38.773 [2024-05-15 15:46:51.774499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.774970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.774984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.773 [2024-05-15 15:46:51.775524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.773 [2024-05-15 15:46:51.775538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.775974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.775993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.774 [2024-05-15 15:46:51.776039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.776278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.776293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.783346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 
15:46:51.783380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.783420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.783452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.783483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.783514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.783530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656ae0 is same with the state(5) to be set 00:28:38.774 [2024-05-15 15:46:51.784871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.784897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.784924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.784940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.784957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.784972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.784989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.785021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.785051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.785083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.785113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.785144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.774 [2024-05-15 15:46:51.785180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.774 [2024-05-15 15:46:51.785195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.785972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.785988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.775 [2024-05-15 15:46:51.786249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.775 [2024-05-15 15:46:51.786266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.786859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.786874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657fc0 is same with the state(5) to be set 00:28:38.776 [2024-05-15 15:46:51.788101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.776 [2024-05-15 15:46:51.788724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.776 [2024-05-15 15:46:51.788740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.788980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.788994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.777 [2024-05-15 15:46:51.789980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.777 [2024-05-15 15:46:51.789996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.778 [2024-05-15 15:46:51.790010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.778 [2024-05-15 15:46:51.790026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.778 [2024-05-15 15:46:51.790040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.778 [2024-05-15 15:46:51.790056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.778 [2024-05-15 15:46:51.790070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.778 [2024-05-15 15:46:51.790084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16592a0 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.791703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:38.778 [2024-05-15 15:46:51.791735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:38.778 [2024-05-15 15:46:51.791755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.778 [2024-05-15 15:46:51.791771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.778 [2024-05-15 15:46:51.791785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:38.778 [2024-05-15 15:46:51.791801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:38.778 [2024-05-15 15:46:51.791919] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.778 [2024-05-15 15:46:51.791947] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.778 [2024-05-15 15:46:51.791967] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
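Every completion in the dump above carries the same generic status 00/08 (command aborted because its submission queue qid:1 is being deleted during the controller reset); only the cid and lba fields advance from entry to entry. When reading a saved copy of this log offline, a count per opcode is usually all that is needed; a minimal sketch, with the file name as a placeholder:

    # count how many READ vs WRITE submissions on sqid:1 were aborted in a saved copy of this log
    grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:1' nvmf_shutdown_tc3.log | sort | uniq -c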
00:28:38.778 [2024-05-15 15:46:51.792053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:38.778 [2024-05-15 15:46:51.792078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:38.778 task offset: 26880 on job bdev=Nvme4n1 fails
00:28:38.778
00:28:38.778 Latency(us)
00:28:38.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:38.778 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme1n1 ended in about 1.11 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme1n1 : 1.11 176.83 11.05 54.13 0.00 274019.93 19612.25 273406.48
00:28:38.778 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme2n1 ended in about 1.11 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme2n1 : 1.11 172.64 10.79 57.55 0.00 270856.34 23398.78 262532.36
00:28:38.778 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme3n1 ended in about 1.12 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme3n1 : 1.12 172.14 10.76 57.38 0.00 267016.91 16990.81 270299.59
00:28:38.778 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme4n1 ended in about 1.07 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme4n1 : 1.07 178.68 11.17 59.56 0.00 252271.22 5995.33 270299.59
00:28:38.778 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme5n1 ended in about 1.12 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme5n1 : 1.12 114.16 7.13 57.08 0.00 346028.56 21845.33 309135.74
00:28:38.778 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme6n1 ended in about 1.12 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme6n1 : 1.12 170.73 10.67 56.91 0.00 255572.76 20971.52 264085.81
00:28:38.778 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme7n1 ended in about 1.13 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme7n1 : 1.13 169.19 10.57 56.40 0.00 253562.69 20097.71 240784.12
00:28:38.778 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme8n1 ended in about 1.14 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme8n1 : 1.14 172.21 10.76 56.23 0.00 246010.94 18544.26 260978.92
00:28:38.778 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme9n1 ended in about 1.14 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme9n1 : 1.14 173.48 10.84 56.07 0.00 240521.57 21359.88 239230.67
00:28:38.778 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.778 Job: Nvme10n1 ended in about 1.08 seconds with error
00:28:38.778 Verification LBA range: start 0x0 length 0x400
00:28:38.778 Nvme10n1 : 1.08 178.45 11.15 59.48 0.00 225286.73 8883.77 274959.93
00:28:38.778 ===================================================================================================================
00:28:38.778 Total : 1678.51
104.91 570.80 0.00 260915.86 5995.33 309135.74 00:28:38.778 [2024-05-15 15:46:51.817546] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:38.778 [2024-05-15 15:46:51.817633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:38.778 [2024-05-15 15:46:51.817977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.818141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.818169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181a5f0 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.818190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a5f0 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.818328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.818455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.818480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1680c10 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.818497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680c10 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.818627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.818744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.818769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1741b50 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.818785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b50 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.818888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.819095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.819120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1737370 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.819136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1737370 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.820527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.778 [2024-05-15 15:46:51.820557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:38.778 [2024-05-15 15:46:51.820720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.820837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.820862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1153610 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.820879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1153610 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.820981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.821114] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.821138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173e030 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.821154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173e030 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.821263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.821367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.821392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180af80 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.821408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180af80 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.821434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181a5f0 (9): Bad file descriptor 00:28:38.778 [2024-05-15 15:46:51.821458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1680c10 (9): Bad file descriptor 00:28:38.778 [2024-05-15 15:46:51.821476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1741b50 (9): Bad file descriptor 00:28:38.778 [2024-05-15 15:46:51.821494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1737370 (9): Bad file descriptor 00:28:38.778 [2024-05-15 15:46:51.821549] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.778 [2024-05-15 15:46:51.821577] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.778 [2024-05-15 15:46:51.821602] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.778 [2024-05-15 15:46:51.821626] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:38.778 [2024-05-15 15:46:51.821647] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
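The repeated posix_sock_create errors above all report errno 111, which on Linux is ECONNREFUSED: consistent with the target side no longer listening on 10.0.0.2:4420 by the time each controller tries to reconnect during the shutdown test. A one-liner (not part of the test itself) to confirm the errno mapping on the build host:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused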
00:28:38.778 [2024-05-15 15:46:51.821715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:38.778 [2024-05-15 15:46:51.821937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.822041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.822067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d660 with addr=10.0.0.2, port=4420 00:28:38.778 [2024-05-15 15:46:51.822084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165d660 is same with the state(5) to be set 00:28:38.778 [2024-05-15 15:46:51.822195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.822318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.778 [2024-05-15 15:46:51.822345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16887d0 with addr=10.0.0.2, port=4420 00:28:38.779 [2024-05-15 15:46:51.822361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16887d0 is same with the state(5) to be set 00:28:38.779 [2024-05-15 15:46:51.822380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1153610 (9): Bad file descriptor 00:28:38.779 [2024-05-15 15:46:51.822399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173e030 (9): Bad file descriptor 00:28:38.779 [2024-05-15 15:46:51.822418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180af80 (9): Bad file descriptor 00:28:38.779 [2024-05-15 15:46:51.822435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.822448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.822465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:38.779 [2024-05-15 15:46:51.822484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.822498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.822512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:38.779 [2024-05-15 15:46:51.822528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.822541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.822554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:38.779 [2024-05-15 15:46:51.822570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.822583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.822596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:28:38.779 [2024-05-15 15:46:51.822680] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.822700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.822713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.822725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.822836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.779 [2024-05-15 15:46:51.822943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.779 [2024-05-15 15:46:51.822968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165b4f0 with addr=10.0.0.2, port=4420 00:28:38.779 [2024-05-15 15:46:51.822984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165b4f0 is same with the state(5) to be set 00:28:38.779 [2024-05-15 15:46:51.823003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165d660 (9): Bad file descriptor 00:28:38.779 [2024-05-15 15:46:51.823022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16887d0 (9): Bad file descriptor 00:28:38.779 [2024-05-15 15:46:51.823039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.823052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.823066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:38.779 [2024-05-15 15:46:51.823083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.823097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.823111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:38.779 [2024-05-15 15:46:51.823126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.823140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.823153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:38.779 [2024-05-15 15:46:51.823190] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.823209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.823231] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
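The "Ctrlr is in error state" / "controller reinitialization failed" sequence is the initiator-side bdev_nvme layer giving up on each subsystem after the reconnect attempts above fail. When reproducing this interactively with the bdevperf app still running, the per-controller state can be inspected over its RPC socket; a sketch only, and the socket path here is an assumption rather than something visible in this trace:

    # from the spdk checkout used by this job; -s must point at the RPC socket the bdevperf app was started with
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers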
00:28:38.779 [2024-05-15 15:46:51.823248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165b4f0 (9): Bad file descriptor 00:28:38.779 [2024-05-15 15:46:51.823265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.823279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.823292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.779 [2024-05-15 15:46:51.823309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.823323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.823336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:38.779 [2024-05-15 15:46:51.823376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.823394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.779 [2024-05-15 15:46:51.823407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:38.779 [2024-05-15 15:46:51.823420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:38.779 [2024-05-15 15:46:51.823433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:38.779 [2024-05-15 15:46:51.823472] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
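The trace that follows resumes target/shutdown.sh after the bdevperf failure; its teardown deliberately tolerates a target process that has already died and relaxes errexit while unloading the kernel modules. A sketch of that pattern as it appears in the trace below (the PID variable name is a placeholder, not necessarily the script's real one):

    kill -9 "$pid" || true      # shutdown.sh line 142: "No such process" is swallowed
    set +e                      # module removal is retried in a loop in the real script
    modprobe -v -r nvme-tcp     # verbose: also rmmod's nvme_fabrics / nvme_keyring once unused
    modprobe -v -r nvme-fabrics
    set -e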
00:28:39.345 15:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:39.345 15:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1403538 00:28:40.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1403538) - No such process 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.279 rmmod nvme_tcp 00:28:40.279 rmmod nvme_fabrics 00:28:40.279 rmmod nvme_keyring 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.279 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.280 15:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.812 15:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.812 00:28:42.812 real 0m7.475s 00:28:42.812 user 0m18.193s 00:28:42.812 sys 0m1.499s 00:28:42.812 
15:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:42.812 15:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.812 ************************************ 00:28:42.812 END TEST nvmf_shutdown_tc3 00:28:42.812 ************************************ 00:28:42.812 15:46:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:42.812 00:28:42.812 real 0m27.343s 00:28:42.812 user 1m14.272s 00:28:42.812 sys 0m6.724s 00:28:42.812 15:46:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:42.812 15:46:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.812 ************************************ 00:28:42.812 END TEST nvmf_shutdown 00:28:42.812 ************************************ 00:28:42.812 15:46:55 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.812 15:46:55 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.812 15:46:55 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:28:42.812 15:46:55 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:42.812 15:46:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.812 ************************************ 00:28:42.812 START TEST nvmf_multicontroller 00:28:42.812 ************************************ 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:42.812 * Looking for test storage... 
00:28:42.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.812 15:46:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:42.813 15:46:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.813 15:46:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.369 15:46:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:45.369 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:45.369 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.369 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:45.370 Found net devices under 0000:09:00.0: cvl_0_0 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:45.370 Found net devices under 0000:09:00.1: cvl_0_1 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.370 15:46:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:45.370 00:28:45.370 --- 10.0.0.2 ping statistics --- 00:28:45.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.370 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:45.370 00:28:45.370 --- 10.0.0.1 ping statistics --- 00:28:45.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.370 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1406350 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1406350 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1406350 ']' 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:45.370 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.370 [2024-05-15 15:46:58.204770] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:45.370 [2024-05-15 15:46:58.204850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.370 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.370 [2024-05-15 15:46:58.247285] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:45.370 [2024-05-15 15:46:58.281782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.370 [2024-05-15 15:46:58.373882] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.370 [2024-05-15 15:46:58.373938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.370 [2024-05-15 15:46:58.373954] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.370 [2024-05-15 15:46:58.373968] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.370 [2024-05-15 15:46:58.373980] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.370 [2024-05-15 15:46:58.374425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.370 [2024-05-15 15:46:58.374461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.370 [2024-05-15 15:46:58.374464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 [2024-05-15 15:46:58.507109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 Malloc0 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 [2024-05-15 15:46:58.577989] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:45.627 [2024-05-15 
15:46:58.578323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 [2024-05-15 15:46:58.586118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 Malloc1 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1406496 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.627 15:46:58 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1406496 /var/tmp/bdevperf.sock 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1406496 ']' 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:45.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:45.627 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.884 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:45.884 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:45.885 15:46:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:45.885 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.885 15:46:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.143 NVMe0n1 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.143 1 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:46.143 15:46:59 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.143 request: 00:28:46.143 { 00:28:46.143 "name": "NVMe0", 00:28:46.143 "trtype": "tcp", 00:28:46.143 "traddr": "10.0.0.2", 00:28:46.143 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:46.143 "hostaddr": "10.0.0.2", 00:28:46.143 "hostsvcid": "60000", 00:28:46.143 "adrfam": "ipv4", 00:28:46.143 "trsvcid": "4420", 00:28:46.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.143 "method": "bdev_nvme_attach_controller", 00:28:46.143 "req_id": 1 00:28:46.143 } 00:28:46.143 Got JSON-RPC error response 00:28:46.143 response: 00:28:46.143 { 00:28:46.143 "code": -114, 00:28:46.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:46.143 } 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.143 request: 00:28:46.143 { 00:28:46.143 "name": "NVMe0", 00:28:46.143 "trtype": "tcp", 00:28:46.143 "traddr": "10.0.0.2", 00:28:46.143 "hostaddr": "10.0.0.2", 00:28:46.143 "hostsvcid": "60000", 00:28:46.143 "adrfam": "ipv4", 00:28:46.143 "trsvcid": "4420", 
00:28:46.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:46.143 "method": "bdev_nvme_attach_controller", 00:28:46.143 "req_id": 1 00:28:46.143 } 00:28:46.143 Got JSON-RPC error response 00:28:46.143 response: 00:28:46.143 { 00:28:46.143 "code": -114, 00:28:46.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:46.143 } 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.143 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.143 request: 00:28:46.144 { 00:28:46.144 "name": "NVMe0", 00:28:46.144 "trtype": "tcp", 00:28:46.144 "traddr": "10.0.0.2", 00:28:46.144 "hostaddr": "10.0.0.2", 00:28:46.144 "hostsvcid": "60000", 00:28:46.144 "adrfam": "ipv4", 00:28:46.144 "trsvcid": "4420", 00:28:46.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.144 "multipath": "disable", 00:28:46.144 "method": "bdev_nvme_attach_controller", 00:28:46.144 "req_id": 1 00:28:46.144 } 00:28:46.144 Got JSON-RPC error response 00:28:46.144 response: 00:28:46.144 { 00:28:46.144 "code": -114, 00:28:46.144 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:46.144 } 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.144 request: 00:28:46.144 { 00:28:46.144 "name": "NVMe0", 00:28:46.144 "trtype": "tcp", 00:28:46.144 "traddr": "10.0.0.2", 00:28:46.144 "hostaddr": "10.0.0.2", 00:28:46.144 "hostsvcid": "60000", 00:28:46.144 "adrfam": "ipv4", 00:28:46.144 "trsvcid": "4420", 00:28:46.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.144 "multipath": "failover", 00:28:46.144 "method": "bdev_nvme_attach_controller", 00:28:46.144 "req_id": 1 00:28:46.144 } 00:28:46.144 Got JSON-RPC error response 00:28:46.144 response: 00:28:46.144 { 00:28:46.144 "code": -114, 00:28:46.144 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:46.144 } 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.144 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.401 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:46.401 
15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.401 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:46.401 15:46:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:47.773 0 00:28:47.773 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:47.773 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1406496 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1406496 ']' 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1406496 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1406496 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1406496' 00:28:47.774 killing process with pid 1406496 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1406496 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1406496 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:47.774 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:47.774 [2024-05-15 15:46:58.689488] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:47.774 [2024-05-15 15:46:58.689576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406496 ] 00:28:47.774 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.774 [2024-05-15 15:46:58.724972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:47.774 [2024-05-15 15:46:58.758245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.774 [2024-05-15 15:46:58.842923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.774 [2024-05-15 15:46:59.367009] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name e5635a91-8139-44eb-b82f-359d282f638f already exists 00:28:47.774 [2024-05-15 15:46:59.367052] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:e5635a91-8139-44eb-b82f-359d282f638f alias for bdev NVMe1n1 00:28:47.774 [2024-05-15 15:46:59.367084] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:47.774 Running I/O for 1 seconds... 
00:28:47.774 00:28:47.774 Latency(us) 00:28:47.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.774 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:47.774 NVMe0n1 : 1.01 19055.43 74.44 0.00 0.00 6706.48 5315.70 12718.84 00:28:47.774 =================================================================================================================== 00:28:47.774 Total : 19055.43 74.44 0.00 0.00 6706.48 5315.70 12718.84 00:28:47.774 Received shutdown signal, test time was about 1.000000 seconds 00:28:47.774 00:28:47.774 Latency(us) 00:28:47.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.774 =================================================================================================================== 00:28:47.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.774 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:47.774 rmmod nvme_tcp 00:28:47.774 rmmod nvme_fabrics 00:28:47.774 rmmod nvme_keyring 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1406350 ']' 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1406350 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1406350 ']' 00:28:47.774 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1406350 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1406350 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1406350' 00:28:48.032 killing process with pid 1406350 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1406350 00:28:48.032 [2024-05-15 
15:47:00.907395] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:48.032 15:47:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1406350 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.290 15:47:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.187 15:47:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:50.187 00:28:50.187 real 0m7.798s 00:28:50.187 user 0m11.410s 00:28:50.187 sys 0m2.610s 00:28:50.187 15:47:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:50.187 15:47:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:50.187 ************************************ 00:28:50.187 END TEST nvmf_multicontroller 00:28:50.187 ************************************ 00:28:50.187 15:47:03 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:50.187 15:47:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:50.187 15:47:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:50.187 15:47:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:50.445 ************************************ 00:28:50.445 START TEST nvmf_aer 00:28:50.445 ************************************ 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:50.445 * Looking for test storage... 
00:28:50.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:50.445 15:47:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:52.971 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:28:52.971 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:52.971 Found net devices under 0000:09:00.0: cvl_0_0 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:52.971 Found net devices under 0000:09:00.1: cvl_0_1 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.971 
15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.971 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:28:52.972 00:28:52.972 --- 10.0.0.2 ping statistics --- 00:28:52.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.972 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:52.972 00:28:52.972 --- 10.0.0.1 ping statistics --- 00:28:52.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.972 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1408991 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1408991 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1408991 ']' 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:52.972 15:47:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.972 [2024-05-15 15:47:05.978135] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:52.972 [2024-05-15 15:47:05.978230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.972 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.972 [2024-05-15 15:47:06.021373] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:52.972 [2024-05-15 15:47:06.052591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.230 [2024-05-15 15:47:06.135752] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:53.230 [2024-05-15 15:47:06.135806] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.230 [2024-05-15 15:47:06.135834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.230 [2024-05-15 15:47:06.135844] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.230 [2024-05-15 15:47:06.135854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.230 [2024-05-15 15:47:06.135934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.230 [2024-05-15 15:47:06.136001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.230 [2024-05-15 15:47:06.136067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.230 [2024-05-15 15:47:06.136069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.230 [2024-05-15 15:47:06.286809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.230 Malloc0 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.230 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.487 15:47:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.487 [2024-05-15 15:47:06.337523] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:53.487 [2024-05-15 15:47:06.337853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.487 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.487 [ 00:28:53.487 { 00:28:53.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:53.487 "subtype": "Discovery", 00:28:53.487 "listen_addresses": [], 00:28:53.487 "allow_any_host": true, 00:28:53.487 "hosts": [] 00:28:53.487 }, 00:28:53.487 { 00:28:53.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.487 "subtype": "NVMe", 00:28:53.487 "listen_addresses": [ 00:28:53.487 { 00:28:53.487 "trtype": "TCP", 00:28:53.487 "adrfam": "IPv4", 00:28:53.487 "traddr": "10.0.0.2", 00:28:53.487 "trsvcid": "4420" 00:28:53.487 } 00:28:53.487 ], 00:28:53.487 "allow_any_host": true, 00:28:53.487 "hosts": [], 00:28:53.487 "serial_number": "SPDK00000000000001", 00:28:53.487 "model_number": "SPDK bdev Controller", 00:28:53.487 "max_namespaces": 2, 00:28:53.487 "min_cntlid": 1, 00:28:53.487 "max_cntlid": 65519, 00:28:53.487 "namespaces": [ 00:28:53.487 { 00:28:53.487 "nsid": 1, 00:28:53.487 "bdev_name": "Malloc0", 00:28:53.487 "name": "Malloc0", 00:28:53.488 "nguid": "59E1BE0F65394F1095C0AF7C9985D1E8", 00:28:53.488 "uuid": "59e1be0f-6539-4f10-95c0-af7c9985d1e8" 00:28:53.488 } 00:28:53.488 ] 00:28:53.488 } 00:28:53.488 ] 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1409134 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:53.488 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:53.488 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.746 Malloc1 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.746 Asynchronous Event Request test 00:28:53.746 Attaching to 10.0.0.2 00:28:53.746 Attached to 10.0.0.2 00:28:53.746 Registering asynchronous event callbacks... 00:28:53.746 Starting namespace attribute notice tests for all controllers... 00:28:53.746 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:53.746 aer_cb - Changed Namespace 00:28:53.746 Cleaning up... 
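The namespace-change AER check captured above can be reproduced by hand. A minimal sketch, assuming a running nvmf_tgt with the cnode1 subsystem listening on 10.0.0.2:4420, the stock scripts/rpc.py helper on the default /var/tmp/spdk.sock, and paths relative to an SPDK checkout:

    # Start the in-tree AER tool in the background; the harness waits for the
    # touch file before it changes the namespace layout.
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!

    # Poll for the touch file (up to ~20s), as waitforfile does above.
    for ((i = 0; i < 200; i++)); do
        [ -e /tmp/aer_touch_file ] && break
        sleep 0.1
    done

    # Adding a second namespace to the subsystem is what triggers the
    # namespace-attribute-changed AEN the tool is waiting for.
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

    # The tool logs 'aer_cb - Changed Namespace', cleans up and exits.
    wait "$aerpid"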
00:28:53.746 [ 00:28:53.746 { 00:28:53.746 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:53.746 "subtype": "Discovery", 00:28:53.746 "listen_addresses": [], 00:28:53.746 "allow_any_host": true, 00:28:53.746 "hosts": [] 00:28:53.746 }, 00:28:53.746 { 00:28:53.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.746 "subtype": "NVMe", 00:28:53.746 "listen_addresses": [ 00:28:53.746 { 00:28:53.746 "trtype": "TCP", 00:28:53.746 "adrfam": "IPv4", 00:28:53.746 "traddr": "10.0.0.2", 00:28:53.746 "trsvcid": "4420" 00:28:53.746 } 00:28:53.746 ], 00:28:53.746 "allow_any_host": true, 00:28:53.746 "hosts": [], 00:28:53.746 "serial_number": "SPDK00000000000001", 00:28:53.746 "model_number": "SPDK bdev Controller", 00:28:53.746 "max_namespaces": 2, 00:28:53.746 "min_cntlid": 1, 00:28:53.746 "max_cntlid": 65519, 00:28:53.746 "namespaces": [ 00:28:53.746 { 00:28:53.746 "nsid": 1, 00:28:53.746 "bdev_name": "Malloc0", 00:28:53.746 "name": "Malloc0", 00:28:53.746 "nguid": "59E1BE0F65394F1095C0AF7C9985D1E8", 00:28:53.746 "uuid": "59e1be0f-6539-4f10-95c0-af7c9985d1e8" 00:28:53.746 }, 00:28:53.746 { 00:28:53.746 "nsid": 2, 00:28:53.746 "bdev_name": "Malloc1", 00:28:53.746 "name": "Malloc1", 00:28:53.746 "nguid": "FC965F377F8C491F86DEB7F9F73A22D6", 00:28:53.746 "uuid": "fc965f37-7f8c-491f-86de-b7f9f73a22d6" 00:28:53.746 } 00:28:53.746 ] 00:28:53.746 } 00:28:53.746 ] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1409134 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.746 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.746 rmmod nvme_tcp 00:28:53.746 rmmod nvme_fabrics 00:28:54.004 rmmod nvme_keyring 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1408991 ']' 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1408991 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1408991 ']' 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1408991 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1408991 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1408991' 00:28:54.004 killing process with pid 1408991 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1408991 00:28:54.004 [2024-05-15 15:47:06.906739] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:54.004 15:47:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1408991 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.262 15:47:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.161 15:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:56.161 00:28:56.161 real 0m5.888s 00:28:56.161 user 0m4.746s 00:28:56.161 sys 0m2.215s 00:28:56.161 15:47:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:56.161 15:47:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.161 ************************************ 00:28:56.161 END TEST nvmf_aer 00:28:56.161 ************************************ 00:28:56.161 15:47:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:56.161 15:47:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:56.161 15:47:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:56.161 15:47:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.161 ************************************ 00:28:56.161 START TEST nvmf_async_init 00:28:56.161 ************************************ 00:28:56.161 15:47:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 
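Both the nvmf_aer run above and the nvmf_async_init run that starts here provision the target the same way before any host-side checks. A rough equivalent of those rpc_cmd calls, assuming scripts/rpc.py against the default /var/tmp/spdk.sock of an already running nvmf_tgt, would be:

    # Enable the TCP transport once per target (flags as used by aer.sh).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # Back a namespace with a malloc bdev and export it over NVMe/TCP.
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Dump the configuration to confirm the listener and namespace are in place.
    ./scripts/rpc.py nvmf_get_subsystems

(nvmf_async_init differs only in the details: as the trace below shows, it uses a null bdev, a fixed nguid and subsystem nqn.2016-06.io.spdk:cnode0.)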
00:28:56.419 * Looking for test storage... 00:28:56.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=942220a520d44b278be5198cb463735a 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:56.419 15:47:09 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:56.419 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:56.420 15:47:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:58.947 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:58.947 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:58.947 Found net devices under 0000:09:00.0: cvl_0_0 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:58.947 Found net devices under 0000:09:00.1: cvl_0_1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:58.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:28:58.947 00:28:58.947 --- 10.0.0.2 ping statistics --- 00:28:58.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.947 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:28:58.947 00:28:58.947 --- 10.0.0.1 ping statistics --- 00:28:58.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.947 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1411370 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1411370 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1411370 ']' 00:28:58.947 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.948 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:58.948 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.948 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:58.948 15:47:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.948 [2024-05-15 15:47:11.902463] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:28:58.948 [2024-05-15 15:47:11.902553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.948 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.948 [2024-05-15 15:47:11.946500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:58.948 [2024-05-15 15:47:11.979375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.206 [2024-05-15 15:47:12.064714] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.206 [2024-05-15 15:47:12.064760] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.206 [2024-05-15 15:47:12.064789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.206 [2024-05-15 15:47:12.064801] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.206 [2024-05-15 15:47:12.064811] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.206 [2024-05-15 15:47:12.064842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 [2024-05-15 15:47:12.196436] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 null0 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 942220a520d44b278be5198cb463735a 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.206 [2024-05-15 15:47:12.236491] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:59.206 [2024-05-15 15:47:12.236765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.206 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.464 nvme0n1 00:28:59.464 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.464 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:59.464 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.464 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.464 [ 00:28:59.464 { 00:28:59.464 "name": "nvme0n1", 00:28:59.464 "aliases": [ 00:28:59.464 "942220a5-20d4-4b27-8be5-198cb463735a" 00:28:59.464 ], 00:28:59.465 "product_name": "NVMe disk", 00:28:59.465 "block_size": 512, 00:28:59.465 "num_blocks": 2097152, 00:28:59.465 "uuid": "942220a5-20d4-4b27-8be5-198cb463735a", 00:28:59.465 "assigned_rate_limits": { 00:28:59.465 "rw_ios_per_sec": 0, 00:28:59.465 "rw_mbytes_per_sec": 0, 00:28:59.465 "r_mbytes_per_sec": 0, 00:28:59.465 "w_mbytes_per_sec": 0 00:28:59.465 }, 00:28:59.465 "claimed": false, 00:28:59.465 "zoned": false, 00:28:59.465 "supported_io_types": { 00:28:59.465 "read": true, 00:28:59.465 "write": true, 00:28:59.465 "unmap": false, 00:28:59.465 "write_zeroes": true, 00:28:59.465 "flush": true, 00:28:59.465 "reset": true, 00:28:59.465 "compare": true, 00:28:59.465 "compare_and_write": true, 00:28:59.465 "abort": true, 00:28:59.465 "nvme_admin": true, 00:28:59.465 "nvme_io": true 00:28:59.465 }, 00:28:59.465 "memory_domains": [ 00:28:59.465 { 00:28:59.465 "dma_device_id": "system", 00:28:59.465 "dma_device_type": 1 00:28:59.465 } 00:28:59.465 ], 00:28:59.465 "driver_specific": { 00:28:59.465 "nvme": [ 00:28:59.465 { 00:28:59.465 "trid": { 00:28:59.465 
"trtype": "TCP", 00:28:59.465 "adrfam": "IPv4", 00:28:59.465 "traddr": "10.0.0.2", 00:28:59.465 "trsvcid": "4420", 00:28:59.465 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:59.465 }, 00:28:59.465 "ctrlr_data": { 00:28:59.465 "cntlid": 1, 00:28:59.465 "vendor_id": "0x8086", 00:28:59.465 "model_number": "SPDK bdev Controller", 00:28:59.465 "serial_number": "00000000000000000000", 00:28:59.465 "firmware_revision": "24.05", 00:28:59.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.465 "oacs": { 00:28:59.465 "security": 0, 00:28:59.465 "format": 0, 00:28:59.465 "firmware": 0, 00:28:59.465 "ns_manage": 0 00:28:59.465 }, 00:28:59.465 "multi_ctrlr": true, 00:28:59.465 "ana_reporting": false 00:28:59.465 }, 00:28:59.465 "vs": { 00:28:59.465 "nvme_version": "1.3" 00:28:59.465 }, 00:28:59.465 "ns_data": { 00:28:59.465 "id": 1, 00:28:59.465 "can_share": true 00:28:59.465 } 00:28:59.465 } 00:28:59.465 ], 00:28:59.465 "mp_policy": "active_passive" 00:28:59.465 } 00:28:59.465 } 00:28:59.465 ] 00:28:59.465 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.465 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:59.465 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.465 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.465 [2024-05-15 15:47:12.489361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:59.465 [2024-05-15 15:47:12.489449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25275c0 (9): Bad file descriptor 00:28:59.722 [2024-05-15 15:47:12.631363] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:59.722 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.722 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:59.722 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.722 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.722 [ 00:28:59.722 { 00:28:59.722 "name": "nvme0n1", 00:28:59.722 "aliases": [ 00:28:59.722 "942220a5-20d4-4b27-8be5-198cb463735a" 00:28:59.722 ], 00:28:59.722 "product_name": "NVMe disk", 00:28:59.722 "block_size": 512, 00:28:59.722 "num_blocks": 2097152, 00:28:59.722 "uuid": "942220a5-20d4-4b27-8be5-198cb463735a", 00:28:59.722 "assigned_rate_limits": { 00:28:59.722 "rw_ios_per_sec": 0, 00:28:59.722 "rw_mbytes_per_sec": 0, 00:28:59.722 "r_mbytes_per_sec": 0, 00:28:59.722 "w_mbytes_per_sec": 0 00:28:59.722 }, 00:28:59.722 "claimed": false, 00:28:59.722 "zoned": false, 00:28:59.722 "supported_io_types": { 00:28:59.722 "read": true, 00:28:59.722 "write": true, 00:28:59.722 "unmap": false, 00:28:59.722 "write_zeroes": true, 00:28:59.722 "flush": true, 00:28:59.722 "reset": true, 00:28:59.722 "compare": true, 00:28:59.722 "compare_and_write": true, 00:28:59.722 "abort": true, 00:28:59.722 "nvme_admin": true, 00:28:59.722 "nvme_io": true 00:28:59.722 }, 00:28:59.722 "memory_domains": [ 00:28:59.722 { 00:28:59.722 "dma_device_id": "system", 00:28:59.722 "dma_device_type": 1 00:28:59.722 } 00:28:59.723 ], 00:28:59.723 "driver_specific": { 00:28:59.723 "nvme": [ 00:28:59.723 { 00:28:59.723 "trid": { 00:28:59.723 "trtype": "TCP", 00:28:59.723 "adrfam": "IPv4", 00:28:59.723 "traddr": "10.0.0.2", 00:28:59.723 "trsvcid": "4420", 00:28:59.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:59.723 }, 00:28:59.723 "ctrlr_data": { 00:28:59.723 "cntlid": 2, 00:28:59.723 "vendor_id": "0x8086", 00:28:59.723 "model_number": "SPDK bdev Controller", 00:28:59.723 "serial_number": "00000000000000000000", 00:28:59.723 "firmware_revision": "24.05", 00:28:59.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.723 "oacs": { 00:28:59.723 "security": 0, 00:28:59.723 "format": 0, 00:28:59.723 "firmware": 0, 00:28:59.723 "ns_manage": 0 00:28:59.723 }, 00:28:59.723 "multi_ctrlr": true, 00:28:59.723 "ana_reporting": false 00:28:59.723 }, 00:28:59.723 "vs": { 00:28:59.723 "nvme_version": "1.3" 00:28:59.723 }, 00:28:59.723 "ns_data": { 00:28:59.723 "id": 1, 00:28:59.723 "can_share": true 00:28:59.723 } 00:28:59.723 } 00:28:59.723 ], 00:28:59.723 "mp_policy": "active_passive" 00:28:59.723 } 00:28:59.723 } 00:28:59.723 ] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kMTNpfWaI8 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # 
chmod 0600 /tmp/tmp.kMTNpfWaI8 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 [2024-05-15 15:47:12.681991] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:59.723 [2024-05-15 15:47:12.682120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kMTNpfWaI8 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 [2024-05-15 15:47:12.690013] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kMTNpfWaI8 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 [2024-05-15 15:47:12.698029] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:59.723 [2024-05-15 15:47:12.698090] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:59.723 nvme0n1 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 [ 00:28:59.723 { 00:28:59.723 "name": "nvme0n1", 00:28:59.723 "aliases": [ 00:28:59.723 "942220a5-20d4-4b27-8be5-198cb463735a" 00:28:59.723 ], 00:28:59.723 "product_name": "NVMe disk", 00:28:59.723 "block_size": 512, 00:28:59.723 "num_blocks": 2097152, 00:28:59.723 "uuid": "942220a5-20d4-4b27-8be5-198cb463735a", 00:28:59.723 "assigned_rate_limits": { 00:28:59.723 "rw_ios_per_sec": 0, 00:28:59.723 "rw_mbytes_per_sec": 0, 00:28:59.723 "r_mbytes_per_sec": 0, 00:28:59.723 "w_mbytes_per_sec": 0 00:28:59.723 }, 
00:28:59.723 "claimed": false, 00:28:59.723 "zoned": false, 00:28:59.723 "supported_io_types": { 00:28:59.723 "read": true, 00:28:59.723 "write": true, 00:28:59.723 "unmap": false, 00:28:59.723 "write_zeroes": true, 00:28:59.723 "flush": true, 00:28:59.723 "reset": true, 00:28:59.723 "compare": true, 00:28:59.723 "compare_and_write": true, 00:28:59.723 "abort": true, 00:28:59.723 "nvme_admin": true, 00:28:59.723 "nvme_io": true 00:28:59.723 }, 00:28:59.723 "memory_domains": [ 00:28:59.723 { 00:28:59.723 "dma_device_id": "system", 00:28:59.723 "dma_device_type": 1 00:28:59.723 } 00:28:59.723 ], 00:28:59.723 "driver_specific": { 00:28:59.723 "nvme": [ 00:28:59.723 { 00:28:59.723 "trid": { 00:28:59.723 "trtype": "TCP", 00:28:59.723 "adrfam": "IPv4", 00:28:59.723 "traddr": "10.0.0.2", 00:28:59.723 "trsvcid": "4421", 00:28:59.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:59.723 }, 00:28:59.723 "ctrlr_data": { 00:28:59.723 "cntlid": 3, 00:28:59.723 "vendor_id": "0x8086", 00:28:59.723 "model_number": "SPDK bdev Controller", 00:28:59.723 "serial_number": "00000000000000000000", 00:28:59.723 "firmware_revision": "24.05", 00:28:59.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.723 "oacs": { 00:28:59.723 "security": 0, 00:28:59.723 "format": 0, 00:28:59.723 "firmware": 0, 00:28:59.723 "ns_manage": 0 00:28:59.723 }, 00:28:59.723 "multi_ctrlr": true, 00:28:59.723 "ana_reporting": false 00:28:59.723 }, 00:28:59.723 "vs": { 00:28:59.723 "nvme_version": "1.3" 00:28:59.723 }, 00:28:59.723 "ns_data": { 00:28:59.723 "id": 1, 00:28:59.723 "can_share": true 00:28:59.723 } 00:28:59.723 } 00:28:59.723 ], 00:28:59.723 "mp_policy": "active_passive" 00:28:59.723 } 00:28:59.723 } 00:28:59.723 ] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.kMTNpfWaI8 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.723 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.723 rmmod nvme_tcp 00:28:59.981 rmmod nvme_fabrics 00:28:59.981 rmmod nvme_keyring 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1411370 ']' 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@490 -- # killprocess 1411370 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1411370 ']' 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1411370 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1411370 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1411370' 00:28:59.981 killing process with pid 1411370 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1411370 00:28:59.981 [2024-05-15 15:47:12.886335] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:59.981 [2024-05-15 15:47:12.886368] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:59.981 [2024-05-15 15:47:12.886399] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:59.981 15:47:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1411370 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.239 15:47:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.136 15:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:02.136 00:29:02.136 real 0m5.889s 00:29:02.136 user 0m2.160s 00:29:02.136 sys 0m2.113s 00:29:02.136 15:47:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:02.136 15:47:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:02.136 ************************************ 00:29:02.136 END TEST nvmf_async_init 00:29:02.136 ************************************ 00:29:02.136 15:47:15 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:02.136 15:47:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:02.136 15:47:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:02.136 15:47:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.136 ************************************ 00:29:02.136 START TEST 
dma 00:29:02.136 ************************************ 00:29:02.136 15:47:15 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:02.426 * Looking for test storage... 00:29:02.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.426 15:47:15 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.426 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.426 15:47:15 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.426 15:47:15 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.426 15:47:15 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.426 15:47:15 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.426 15:47:15 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:02.427 15:47:15 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.427 15:47:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.427 15:47:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:02.427 15:47:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:02.427 00:29:02.427 real 0m0.063s 00:29:02.427 user 0m0.035s 00:29:02.427 sys 0m0.032s 00:29:02.427 15:47:15 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:02.427 15:47:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:29:02.427 ************************************ 00:29:02.427 END TEST dma 00:29:02.427 ************************************ 00:29:02.427 15:47:15 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:02.427 15:47:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:02.427 15:47:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:02.427 15:47:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.427 ************************************ 00:29:02.427 START TEST 
nvmf_identify 00:29:02.427 ************************************ 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:02.427 * Looking for test storage... 00:29:02.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.427 15:47:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:04.954 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:04.954 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:04.954 Found net devices under 0000:09:00.0: cvl_0_0 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.954 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:04.955 Found net devices under 0000:09:00.1: cvl_0_1 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:04.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:29:04.955 00:29:04.955 --- 10.0.0.2 ping statistics --- 00:29:04.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.955 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:29:04.955 00:29:04.955 --- 10.0.0.1 ping statistics --- 00:29:04.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.955 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1413905 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1413905 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1413905 ']' 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:04.955 15:47:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.955 [2024-05-15 15:47:17.905551] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:04.955 [2024-05-15 15:47:17.905636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.955 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.955 [2024-05-15 15:47:17.950010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
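Before host/identify.sh can talk to the target, nvmftestinit wires the two E810 ports (cvl_0_0 and cvl_0_1) into a point-to-point test network: the target-side port is moved into a private network namespace, both ends get 10.0.0.x/24 addresses, port 4420 is opened in iptables, and reachability is ping-checked in both directions, which is the output above. The equivalent plain ip/iptables commands, taken from the nvmf/common.sh steps in the trace (interface names and addresses are the ones this rig reports; error handling omitted):

  # Move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic in and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # The target is then started inside the namespace (pid 1413905 in this run)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF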
00:29:04.955 [2024-05-15 15:47:17.985304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.213 [2024-05-15 15:47:18.079454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.213 [2024-05-15 15:47:18.079517] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.213 [2024-05-15 15:47:18.079534] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.213 [2024-05-15 15:47:18.079547] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.213 [2024-05-15 15:47:18.079559] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.213 [2024-05-15 15:47:18.079615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.213 [2024-05-15 15:47:18.079646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.213 [2024-05-15 15:47:18.079763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.213 [2024-05-15 15:47:18.079766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 [2024-05-15 15:47:18.199687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 Malloc0 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.213 
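With the target process up, identify.sh provisions what the identify tool will enumerate: a TCP transport (with the -o and -u 8192 options the test passes), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, and a namespace exported with explicit NGUID/EUI-64 values; the data and discovery listeners follow immediately below in the trace. Collected here as plain rpc.py calls, a sketch rather than the script verbatim (rpc_cmd in the harness forwards to the same JSON-RPC methods):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

  # Listeners (added just below in the trace): one on the subsystem, one on the discovery service
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems

  # The test then points the identify example at the discovery service with full debug tracing,
  # which produces the nvme_tcp/nvme_ctrlr DEBUG stream that follows
  ./build/bin/spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'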
15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 [2024-05-15 15:47:18.276352] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:05.213 [2024-05-15 15:47:18.276684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.213 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.213 [ 00:29:05.213 { 00:29:05.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:05.213 "subtype": "Discovery", 00:29:05.213 "listen_addresses": [ 00:29:05.213 { 00:29:05.214 "trtype": "TCP", 00:29:05.214 "adrfam": "IPv4", 00:29:05.214 "traddr": "10.0.0.2", 00:29:05.214 "trsvcid": "4420" 00:29:05.214 } 00:29:05.214 ], 00:29:05.214 "allow_any_host": true, 00:29:05.214 "hosts": [] 00:29:05.214 }, 00:29:05.214 { 00:29:05.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.214 "subtype": "NVMe", 00:29:05.214 "listen_addresses": [ 00:29:05.214 { 00:29:05.214 "trtype": "TCP", 00:29:05.214 "adrfam": "IPv4", 00:29:05.214 "traddr": "10.0.0.2", 00:29:05.214 "trsvcid": "4420" 00:29:05.214 } 00:29:05.214 ], 00:29:05.214 "allow_any_host": true, 00:29:05.214 "hosts": [], 00:29:05.214 "serial_number": "SPDK00000000000001", 00:29:05.214 "model_number": "SPDK bdev Controller", 00:29:05.214 "max_namespaces": 32, 00:29:05.214 "min_cntlid": 1, 00:29:05.214 "max_cntlid": 65519, 00:29:05.214 "namespaces": [ 00:29:05.214 { 00:29:05.214 "nsid": 1, 00:29:05.214 "bdev_name": "Malloc0", 00:29:05.214 "name": "Malloc0", 00:29:05.214 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:05.214 "eui64": "ABCDEF0123456789", 00:29:05.214 "uuid": "090fd109-cec3-40c6-8c22-5bf609e797db" 00:29:05.214 } 00:29:05.214 ] 00:29:05.214 } 00:29:05.214 ] 00:29:05.214 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.214 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:05.474 [2024-05-15 15:47:18.318213] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:29:05.474 [2024-05-15 15:47:18.318277] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413929 ] 00:29:05.474 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.474 [2024-05-15 15:47:18.337985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:05.474 [2024-05-15 15:47:18.355680] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:05.474 [2024-05-15 15:47:18.355744] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:05.474 [2024-05-15 15:47:18.355754] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:05.474 [2024-05-15 15:47:18.355769] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:05.474 [2024-05-15 15:47:18.355783] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:05.474 [2024-05-15 15:47:18.356132] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:05.474 [2024-05-15 15:47:18.356195] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbee450 0 00:29:05.474 [2024-05-15 15:47:18.362247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:05.474 [2024-05-15 15:47:18.362270] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:05.474 [2024-05-15 15:47:18.362279] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:05.474 [2024-05-15 15:47:18.362285] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:05.474 [2024-05-15 15:47:18.362356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.362370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.362378] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.474 [2024-05-15 15:47:18.362398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:05.474 [2024-05-15 15:47:18.362431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.474 [2024-05-15 15:47:18.370231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.474 [2024-05-15 15:47:18.370250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.474 [2024-05-15 15:47:18.370258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370266] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.474 [2024-05-15 15:47:18.370289] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:05.474 [2024-05-15 15:47:18.370301] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:05.474 [2024-05-15 15:47:18.370313] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:05.474 
[2024-05-15 15:47:18.370336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.474 [2024-05-15 15:47:18.370363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.474 [2024-05-15 15:47:18.370388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.474 [2024-05-15 15:47:18.370553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.474 [2024-05-15 15:47:18.370567] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.474 [2024-05-15 15:47:18.370574] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.474 [2024-05-15 15:47:18.370591] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:05.474 [2024-05-15 15:47:18.370605] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:05.474 [2024-05-15 15:47:18.370617] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370632] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.474 [2024-05-15 15:47:18.370642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.474 [2024-05-15 15:47:18.370664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.474 [2024-05-15 15:47:18.370800] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.474 [2024-05-15 15:47:18.370814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.474 [2024-05-15 15:47:18.370821] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370828] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.474 [2024-05-15 15:47:18.370837] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:05.474 [2024-05-15 15:47:18.370852] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:05.474 [2024-05-15 15:47:18.370865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370872] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.370879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.474 [2024-05-15 15:47:18.370889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.474 [2024-05-15 15:47:18.370915] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0xc55800, cid 0, qid 0 00:29:05.474 [2024-05-15 15:47:18.371023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.474 [2024-05-15 15:47:18.371034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.474 [2024-05-15 15:47:18.371041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.371048] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.474 [2024-05-15 15:47:18.371058] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:05.474 [2024-05-15 15:47:18.371074] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.371083] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.371090] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.474 [2024-05-15 15:47:18.371100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.474 [2024-05-15 15:47:18.371121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.474 [2024-05-15 15:47:18.371228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.474 [2024-05-15 15:47:18.371242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.474 [2024-05-15 15:47:18.371249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.474 [2024-05-15 15:47:18.371256] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.474 [2024-05-15 15:47:18.371265] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:05.474 [2024-05-15 15:47:18.371275] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:05.474 [2024-05-15 15:47:18.371288] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:05.474 [2024-05-15 15:47:18.371398] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:05.475 [2024-05-15 15:47:18.371407] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:05.475 [2024-05-15 15:47:18.371422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371430] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371436] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.371447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.475 [2024-05-15 15:47:18.371469] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.475 [2024-05-15 15:47:18.371628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.475 [2024-05-15 15:47:18.371643] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.475 [2024-05-15 15:47:18.371650] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.475 [2024-05-15 15:47:18.371665] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:05.475 [2024-05-15 15:47:18.371682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.371713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.475 [2024-05-15 15:47:18.371734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.475 [2024-05-15 15:47:18.371849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.475 [2024-05-15 15:47:18.371864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.475 [2024-05-15 15:47:18.371871] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371878] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.475 [2024-05-15 15:47:18.371887] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:05.475 [2024-05-15 15:47:18.371896] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:05.475 [2024-05-15 15:47:18.371910] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:05.475 [2024-05-15 15:47:18.371932] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:05.475 [2024-05-15 15:47:18.371948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.371956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.371967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.475 [2024-05-15 15:47:18.371988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.475 [2024-05-15 15:47:18.372152] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.475 [2024-05-15 15:47:18.372167] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.475 [2024-05-15 15:47:18.372174] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372181] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbee450): datao=0, datal=4096, cccid=0 00:29:05.475 [2024-05-15 15:47:18.372189] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xc55800) on tqpair(0xbee450): expected_datao=0, payload_size=4096 00:29:05.475 [2024-05-15 15:47:18.372198] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372210] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372228] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372255] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.475 [2024-05-15 15:47:18.372266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.475 [2024-05-15 15:47:18.372273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.475 [2024-05-15 15:47:18.372292] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:05.475 [2024-05-15 15:47:18.372302] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:05.475 [2024-05-15 15:47:18.372310] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:05.475 [2024-05-15 15:47:18.372325] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:05.475 [2024-05-15 15:47:18.372334] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:05.475 [2024-05-15 15:47:18.372343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:05.475 [2024-05-15 15:47:18.372361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:05.475 [2024-05-15 15:47:18.372375] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372382] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372389] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.372401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:05.475 [2024-05-15 15:47:18.372422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.475 [2024-05-15 15:47:18.372579] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.475 [2024-05-15 15:47:18.372593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.475 [2024-05-15 15:47:18.372600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372607] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55800) on tqpair=0xbee450 00:29:05.475 [2024-05-15 15:47:18.372621] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372629] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.372645] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.475 [2024-05-15 15:47:18.372656] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372662] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.372678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.475 [2024-05-15 15:47:18.372688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.372710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.475 [2024-05-15 15:47:18.372720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.372742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.475 [2024-05-15 15:47:18.372751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:05.475 [2024-05-15 15:47:18.372771] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:05.475 [2024-05-15 15:47:18.372783] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.372805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.372816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.475 [2024-05-15 15:47:18.372838] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55800, cid 0, qid 0 00:29:05.475 [2024-05-15 15:47:18.372863] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55960, cid 1, qid 0 00:29:05.475 [2024-05-15 15:47:18.372878] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55ac0, cid 2, qid 0 00:29:05.475 [2024-05-15 15:47:18.372886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55c20, cid 3, qid 0 00:29:05.475 [2024-05-15 15:47:18.372894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55d80, cid 4, qid 0 00:29:05.475 [2024-05-15 15:47:18.373049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.475 [2024-05-15 15:47:18.373064] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.475 [2024-05-15 15:47:18.373071] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.373078] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55d80) on tqpair=0xbee450 00:29:05.475 [2024-05-15 15:47:18.373088] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:05.475 [2024-05-15 15:47:18.373098] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:05.475 [2024-05-15 15:47:18.373116] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.373125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbee450) 00:29:05.475 [2024-05-15 15:47:18.373136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.475 [2024-05-15 15:47:18.373157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55d80, cid 4, qid 0 00:29:05.475 [2024-05-15 15:47:18.373316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.475 [2024-05-15 15:47:18.373332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.475 [2024-05-15 15:47:18.373339] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.373345] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbee450): datao=0, datal=4096, cccid=4 00:29:05.475 [2024-05-15 15:47:18.373353] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc55d80) on tqpair(0xbee450): expected_datao=0, payload_size=4096 00:29:05.475 [2024-05-15 15:47:18.373361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.373377] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.373386] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.475 [2024-05-15 15:47:18.416242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.475 [2024-05-15 15:47:18.416262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.476 [2024-05-15 15:47:18.416269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416276] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55d80) on tqpair=0xbee450 00:29:05.476 [2024-05-15 15:47:18.416299] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:05.476 [2024-05-15 15:47:18.416342] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbee450) 00:29:05.476 [2024-05-15 15:47:18.416365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.476 [2024-05-15 15:47:18.416377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbee450) 00:29:05.476 [2024-05-15 15:47:18.416401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.476 [2024-05-15 15:47:18.416430] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55d80, cid 4, qid 0 00:29:05.476 [2024-05-15 15:47:18.416446] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55ee0, cid 5, qid 0 00:29:05.476 [2024-05-15 15:47:18.416606] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.476 [2024-05-15 15:47:18.416622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.476 [2024-05-15 15:47:18.416629] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416636] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbee450): datao=0, datal=1024, cccid=4 00:29:05.476 [2024-05-15 15:47:18.416644] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc55d80) on tqpair(0xbee450): expected_datao=0, payload_size=1024 00:29:05.476 [2024-05-15 15:47:18.416652] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416662] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416669] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.476 [2024-05-15 15:47:18.416687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.476 [2024-05-15 15:47:18.416693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.416700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55ee0) on tqpair=0xbee450 00:29:05.476 [2024-05-15 15:47:18.457375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.476 [2024-05-15 15:47:18.457395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.476 [2024-05-15 15:47:18.457402] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55d80) on tqpair=0xbee450 00:29:05.476 [2024-05-15 15:47:18.457429] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbee450) 00:29:05.476 [2024-05-15 15:47:18.457451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.476 [2024-05-15 15:47:18.457480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55d80, cid 4, qid 0 00:29:05.476 [2024-05-15 15:47:18.457620] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.476 [2024-05-15 15:47:18.457632] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.476 [2024-05-15 15:47:18.457639] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457645] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbee450): datao=0, datal=3072, cccid=4 00:29:05.476 [2024-05-15 15:47:18.457653] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc55d80) on tqpair(0xbee450): expected_datao=0, payload_size=3072 00:29:05.476 [2024-05-15 15:47:18.457661] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.476 [2024-05-15 
15:47:18.457671] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457679] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.476 [2024-05-15 15:47:18.457743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.476 [2024-05-15 15:47:18.457750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55d80) on tqpair=0xbee450 00:29:05.476 [2024-05-15 15:47:18.457771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457780] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbee450) 00:29:05.476 [2024-05-15 15:47:18.457791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.476 [2024-05-15 15:47:18.457819] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55d80, cid 4, qid 0 00:29:05.476 [2024-05-15 15:47:18.457949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.476 [2024-05-15 15:47:18.457964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.476 [2024-05-15 15:47:18.457971] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.457977] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbee450): datao=0, datal=8, cccid=4 00:29:05.476 [2024-05-15 15:47:18.457985] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc55d80) on tqpair(0xbee450): expected_datao=0, payload_size=8 00:29:05.476 [2024-05-15 15:47:18.457993] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.458003] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.458010] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.499352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.476 [2024-05-15 15:47:18.499371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.476 [2024-05-15 15:47:18.499379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.476 [2024-05-15 15:47:18.499386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55d80) on tqpair=0xbee450 00:29:05.476 ===================================================== 00:29:05.476 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:05.476 ===================================================== 00:29:05.476 Controller Capabilities/Features 00:29:05.476 ================================ 00:29:05.476 Vendor ID: 0000 00:29:05.476 Subsystem Vendor ID: 0000 00:29:05.476 Serial Number: .................... 00:29:05.476 Model Number: ........................................ 
00:29:05.476 Firmware Version: 24.05
00:29:05.476 Recommended Arb Burst: 0
00:29:05.476 IEEE OUI Identifier: 00 00 00
00:29:05.476 Multi-path I/O
00:29:05.476 May have multiple subsystem ports: No
00:29:05.476 May have multiple controllers: No
00:29:05.476 Associated with SR-IOV VF: No
00:29:05.476 Max Data Transfer Size: 131072
00:29:05.476 Max Number of Namespaces: 0
00:29:05.476 Max Number of I/O Queues: 1024
00:29:05.476 NVMe Specification Version (VS): 1.3
00:29:05.476 NVMe Specification Version (Identify): 1.3
00:29:05.476 Maximum Queue Entries: 128
00:29:05.476 Contiguous Queues Required: Yes
00:29:05.476 Arbitration Mechanisms Supported
00:29:05.476 Weighted Round Robin: Not Supported
00:29:05.476 Vendor Specific: Not Supported
00:29:05.476 Reset Timeout: 15000 ms
00:29:05.476 Doorbell Stride: 4 bytes
00:29:05.476 NVM Subsystem Reset: Not Supported
00:29:05.476 Command Sets Supported
00:29:05.476 NVM Command Set: Supported
00:29:05.476 Boot Partition: Not Supported
00:29:05.476 Memory Page Size Minimum: 4096 bytes
00:29:05.476 Memory Page Size Maximum: 4096 bytes
00:29:05.476 Persistent Memory Region: Not Supported
00:29:05.476 Optional Asynchronous Events Supported
00:29:05.476 Namespace Attribute Notices: Not Supported
00:29:05.476 Firmware Activation Notices: Not Supported
00:29:05.476 ANA Change Notices: Not Supported
00:29:05.476 PLE Aggregate Log Change Notices: Not Supported
00:29:05.476 LBA Status Info Alert Notices: Not Supported
00:29:05.476 EGE Aggregate Log Change Notices: Not Supported
00:29:05.476 Normal NVM Subsystem Shutdown event: Not Supported
00:29:05.476 Zone Descriptor Change Notices: Not Supported
00:29:05.476 Discovery Log Change Notices: Supported
00:29:05.476 Controller Attributes
00:29:05.476 128-bit Host Identifier: Not Supported
00:29:05.476 Non-Operational Permissive Mode: Not Supported
00:29:05.476 NVM Sets: Not Supported
00:29:05.476 Read Recovery Levels: Not Supported
00:29:05.476 Endurance Groups: Not Supported
00:29:05.476 Predictable Latency Mode: Not Supported
00:29:05.476 Traffic Based Keep ALive: Not Supported
00:29:05.476 Namespace Granularity: Not Supported
00:29:05.476 SQ Associations: Not Supported
00:29:05.476 UUID List: Not Supported
00:29:05.476 Multi-Domain Subsystem: Not Supported
00:29:05.476 Fixed Capacity Management: Not Supported
00:29:05.476 Variable Capacity Management: Not Supported
00:29:05.476 Delete Endurance Group: Not Supported
00:29:05.476 Delete NVM Set: Not Supported
00:29:05.476 Extended LBA Formats Supported: Not Supported
00:29:05.476 Flexible Data Placement Supported: Not Supported
00:29:05.476 
00:29:05.476 Controller Memory Buffer Support
00:29:05.476 ================================
00:29:05.476 Supported: No
00:29:05.476 
00:29:05.476 Persistent Memory Region Support
00:29:05.476 ================================
00:29:05.476 Supported: No
00:29:05.476 
00:29:05.476 Admin Command Set Attributes
00:29:05.476 ============================
00:29:05.476 Security Send/Receive: Not Supported
00:29:05.476 Format NVM: Not Supported
00:29:05.476 Firmware Activate/Download: Not Supported
00:29:05.476 Namespace Management: Not Supported
00:29:05.476 Device Self-Test: Not Supported
00:29:05.476 Directives: Not Supported
00:29:05.476 NVMe-MI: Not Supported
00:29:05.476 Virtualization Management: Not Supported
00:29:05.476 Doorbell Buffer Config: Not Supported
00:29:05.476 Get LBA Status Capability: Not Supported
00:29:05.476 Command & Feature Lockdown Capability: Not Supported
00:29:05.476 Abort Command Limit: 1
00:29:05.477 Async Event Request Limit: 4
00:29:05.477 Number of Firmware Slots: N/A
00:29:05.477 Firmware Slot 1 Read-Only: N/A
00:29:05.477 Firmware Activation Without Reset: N/A
00:29:05.477 Multiple Update Detection Support: N/A
00:29:05.477 Firmware Update Granularity: No Information Provided
00:29:05.477 Per-Namespace SMART Log: No
00:29:05.477 Asymmetric Namespace Access Log Page: Not Supported
00:29:05.477 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:05.477 Command Effects Log Page: Not Supported
00:29:05.477 Get Log Page Extended Data: Supported
00:29:05.477 Telemetry Log Pages: Not Supported
00:29:05.477 Persistent Event Log Pages: Not Supported
00:29:05.477 Supported Log Pages Log Page: May Support
00:29:05.477 Commands Supported & Effects Log Page: Not Supported
00:29:05.477 Feature Identifiers & Effects Log Page:May Support
00:29:05.477 NVMe-MI Commands & Effects Log Page: May Support
00:29:05.477 Data Area 4 for Telemetry Log: Not Supported
00:29:05.477 Error Log Page Entries Supported: 128
00:29:05.477 Keep Alive: Not Supported
00:29:05.477 
00:29:05.477 NVM Command Set Attributes
00:29:05.477 ==========================
00:29:05.477 Submission Queue Entry Size
00:29:05.477 Max: 1
00:29:05.477 Min: 1
00:29:05.477 Completion Queue Entry Size
00:29:05.477 Max: 1
00:29:05.477 Min: 1
00:29:05.477 Number of Namespaces: 0
00:29:05.477 Compare Command: Not Supported
00:29:05.477 Write Uncorrectable Command: Not Supported
00:29:05.477 Dataset Management Command: Not Supported
00:29:05.477 Write Zeroes Command: Not Supported
00:29:05.477 Set Features Save Field: Not Supported
00:29:05.477 Reservations: Not Supported
00:29:05.477 Timestamp: Not Supported
00:29:05.477 Copy: Not Supported
00:29:05.477 Volatile Write Cache: Not Present
00:29:05.477 Atomic Write Unit (Normal): 1
00:29:05.477 Atomic Write Unit (PFail): 1
00:29:05.477 Atomic Compare & Write Unit: 1
00:29:05.477 Fused Compare & Write: Supported
00:29:05.477 Scatter-Gather List
00:29:05.477 SGL Command Set: Supported
00:29:05.477 SGL Keyed: Supported
00:29:05.477 SGL Bit Bucket Descriptor: Not Supported
00:29:05.477 SGL Metadata Pointer: Not Supported
00:29:05.477 Oversized SGL: Not Supported
00:29:05.477 SGL Metadata Address: Not Supported
00:29:05.477 SGL Offset: Supported
00:29:05.477 Transport SGL Data Block: Not Supported
00:29:05.477 Replay Protected Memory Block: Not Supported
00:29:05.477 
00:29:05.477 Firmware Slot Information
00:29:05.477 =========================
00:29:05.477 Active slot: 0
00:29:05.477 
00:29:05.477 
00:29:05.477 Error Log
00:29:05.477 =========
00:29:05.477 
00:29:05.477 Active Namespaces
00:29:05.477 =================
00:29:05.477 Discovery Log Page
00:29:05.477 ==================
00:29:05.477 Generation Counter: 2
00:29:05.477 Number of Records: 2
00:29:05.477 Record Format: 0
00:29:05.477 
00:29:05.477 Discovery Log Entry 0
00:29:05.477 ----------------------
00:29:05.477 Transport Type: 3 (TCP)
00:29:05.477 Address Family: 1 (IPv4)
00:29:05.477 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:05.477 Entry Flags:
00:29:05.477 Duplicate Returned Information: 1
00:29:05.477 Explicit Persistent Connection Support for Discovery: 1
00:29:05.477 Transport Requirements:
00:29:05.477 Secure Channel: Not Required
00:29:05.477 Port ID: 0 (0x0000)
00:29:05.477 Controller ID: 65535 (0xffff)
00:29:05.477 Admin Max SQ Size: 128
00:29:05.477 Transport Service Identifier: 4420
00:29:05.477 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:05.477 Transport Address: 10.0.0.2
00:29:05.477 
Discovery Log Entry 1 00:29:05.477 ---------------------- 00:29:05.477 Transport Type: 3 (TCP) 00:29:05.477 Address Family: 1 (IPv4) 00:29:05.477 Subsystem Type: 2 (NVM Subsystem) 00:29:05.477 Entry Flags: 00:29:05.477 Duplicate Returned Information: 0 00:29:05.477 Explicit Persistent Connection Support for Discovery: 0 00:29:05.477 Transport Requirements: 00:29:05.477 Secure Channel: Not Required 00:29:05.477 Port ID: 0 (0x0000) 00:29:05.477 Controller ID: 65535 (0xffff) 00:29:05.477 Admin Max SQ Size: 128 00:29:05.477 Transport Service Identifier: 4420 00:29:05.477 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:05.477 Transport Address: 10.0.0.2 [2024-05-15 15:47:18.499509] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:05.477 [2024-05-15 15:47:18.499536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.477 [2024-05-15 15:47:18.499549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.477 [2024-05-15 15:47:18.499559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.477 [2024-05-15 15:47:18.499569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.477 [2024-05-15 15:47:18.499583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.499591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.499598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbee450) 00:29:05.477 [2024-05-15 15:47:18.499609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.477 [2024-05-15 15:47:18.499650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55c20, cid 3, qid 0 00:29:05.477 [2024-05-15 15:47:18.499861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.477 [2024-05-15 15:47:18.499876] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.477 [2024-05-15 15:47:18.499884] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.499890] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55c20) on tqpair=0xbee450 00:29:05.477 [2024-05-15 15:47:18.499909] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.499918] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.499925] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbee450) 00:29:05.477 [2024-05-15 15:47:18.499936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.477 [2024-05-15 15:47:18.499963] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55c20, cid 3, qid 0 00:29:05.477 [2024-05-15 15:47:18.500082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.477 [2024-05-15 15:47:18.500097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.477 [2024-05-15 15:47:18.500103] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.500110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55c20) on tqpair=0xbee450 00:29:05.477 [2024-05-15 15:47:18.500126] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:05.477 [2024-05-15 15:47:18.500136] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:05.477 [2024-05-15 15:47:18.500153] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.500162] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.500169] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbee450) 00:29:05.477 [2024-05-15 15:47:18.500179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.477 [2024-05-15 15:47:18.500200] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55c20, cid 3, qid 0 00:29:05.477 [2024-05-15 15:47:18.504244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.477 [2024-05-15 15:47:18.504260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.477 [2024-05-15 15:47:18.504268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.504274] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55c20) on tqpair=0xbee450 00:29:05.477 [2024-05-15 15:47:18.504306] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.504316] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.504323] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbee450) 00:29:05.477 [2024-05-15 15:47:18.504334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.477 [2024-05-15 15:47:18.504357] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc55c20, cid 3, qid 0 00:29:05.477 [2024-05-15 15:47:18.504480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.477 [2024-05-15 15:47:18.504495] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.477 [2024-05-15 15:47:18.504502] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.477 [2024-05-15 15:47:18.504509] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc55c20) on tqpair=0xbee450 00:29:05.477 [2024-05-15 15:47:18.504523] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:29:05.477 00:29:05.477 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:05.477 [2024-05-15 15:47:18.537989] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:29:05.477 [2024-05-15 15:47:18.538036] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413936 ] 00:29:05.477 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.477 [2024-05-15 15:47:18.556519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:05.477 [2024-05-15 15:47:18.574108] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:05.478 [2024-05-15 15:47:18.574153] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:05.478 [2024-05-15 15:47:18.574163] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:05.478 [2024-05-15 15:47:18.574177] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:05.478 [2024-05-15 15:47:18.574188] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:05.478 [2024-05-15 15:47:18.574422] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:05.478 [2024-05-15 15:47:18.574464] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb82450 0 00:29:05.736 [2024-05-15 15:47:18.581469] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:05.736 [2024-05-15 15:47:18.581490] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:05.736 [2024-05-15 15:47:18.581498] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:05.736 [2024-05-15 15:47:18.581505] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:05.736 [2024-05-15 15:47:18.581543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.581555] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.581562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.581577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:05.737 [2024-05-15 15:47:18.581604] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.589232] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.589252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.589259] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589266] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.589281] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:05.737 [2024-05-15 15:47:18.589307] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:05.737 [2024-05-15 15:47:18.589317] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:05.737 [2024-05-15 15:47:18.589336] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.589364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.589390] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.589520] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.589533] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.589541] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589548] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.589557] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:05.737 [2024-05-15 15:47:18.589570] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:05.737 [2024-05-15 15:47:18.589583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.589609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.589631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.589732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.589748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.589756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.589772] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:05.737 [2024-05-15 15:47:18.589787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:05.737 [2024-05-15 15:47:18.589800] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.589825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.589847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.589949] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.589961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.589968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.589975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.589983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:05.737 [2024-05-15 15:47:18.590001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590010] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590017] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.590028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.590049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.590149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.590161] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.590168] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.590183] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:05.737 [2024-05-15 15:47:18.590191] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:05.737 [2024-05-15 15:47:18.590205] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:05.737 [2024-05-15 15:47:18.590315] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:05.737 [2024-05-15 15:47:18.590324] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:05.737 [2024-05-15 15:47:18.590336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590344] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590351] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.590362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.590390] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.590497] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.590513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.590520] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.590536] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:05.737 [2024-05-15 15:47:18.590553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590563] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590570] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.590581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.590602] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.590708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.590723] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.590730] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.590745] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:05.737 [2024-05-15 15:47:18.590754] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.590768] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:05.737 [2024-05-15 15:47:18.590786] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.590800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.590808] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.590819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.590841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.590974] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.737 [2024-05-15 15:47:18.590990] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.737 [2024-05-15 15:47:18.590997] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591003] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=4096, cccid=0 00:29:05.737 [2024-05-15 15:47:18.591011] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe9800) on tqpair(0xb82450): expected_datao=0, payload_size=4096 00:29:05.737 [2024-05-15 15:47:18.591019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591042] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591051] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.591131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.591138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591145] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.591160] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:05.737 [2024-05-15 15:47:18.591170] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:05.737 [2024-05-15 15:47:18.591177] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:05.737 [2024-05-15 15:47:18.591188] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:05.737 [2024-05-15 15:47:18.591197] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:05.737 [2024-05-15 15:47:18.591205] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591241] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591249] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:05.737 [2024-05-15 15:47:18.591289] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.591413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.591429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.591436] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591443] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9800) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.591453] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591461] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.737 [2024-05-15 15:47:18.591489] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:29:05.737 [2024-05-15 15:47:18.591496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.737 [2024-05-15 15:47:18.591522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591529] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.737 [2024-05-15 15:47:18.591555] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591569] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.737 [2024-05-15 15:47:18.591587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591612] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.591683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9800, cid 0, qid 0 00:29:05.737 [2024-05-15 15:47:18.591695] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9960, cid 1, qid 0 00:29:05.737 [2024-05-15 15:47:18.591702] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9ac0, cid 2, qid 0 00:29:05.737 [2024-05-15 15:47:18.591710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.737 [2024-05-15 15:47:18.591733] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.737 [2024-05-15 15:47:18.591870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.591882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.591889] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591896] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.591905] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive 
every 5000000 us 00:29:05.737 [2024-05-15 15:47:18.591914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591928] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.591952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591960] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.591966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.591977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:05.737 [2024-05-15 15:47:18.591998] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.737 [2024-05-15 15:47:18.592120] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.737 [2024-05-15 15:47:18.592132] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.737 [2024-05-15 15:47:18.592139] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.592146] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.737 [2024-05-15 15:47:18.592204] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.592235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:05.737 [2024-05-15 15:47:18.592252] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.737 [2024-05-15 15:47:18.592260] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.737 [2024-05-15 15:47:18.592271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.737 [2024-05-15 15:47:18.592299] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.737 [2024-05-15 15:47:18.592420] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.592436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.592443] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592449] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=4096, cccid=4 00:29:05.738 [2024-05-15 15:47:18.592457] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe9d80) on tqpair(0xb82450): expected_datao=0, payload_size=4096 00:29:05.738 [2024-05-15 15:47:18.592465] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592482] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592492] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592563] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.592578] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.592585] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.592610] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:05.738 [2024-05-15 15:47:18.592628] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.592647] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.592661] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.592680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.592702] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.738 [2024-05-15 15:47:18.592830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.592845] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.592852] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592859] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=4096, cccid=4 00:29:05.738 [2024-05-15 15:47:18.592867] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe9d80) on tqpair(0xb82450): expected_datao=0, payload_size=4096 00:29:05.738 [2024-05-15 15:47:18.592874] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592910] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.592919] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.593022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.593037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.593044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.593051] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.593075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.593095] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.593108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 
15:47:18.593120] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.593131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.593153] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.738 [2024-05-15 15:47:18.597228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.597245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.597252] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597258] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=4096, cccid=4 00:29:05.738 [2024-05-15 15:47:18.597281] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe9d80) on tqpair(0xb82450): expected_datao=0, payload_size=4096 00:29:05.738 [2024-05-15 15:47:18.597289] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597299] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597307] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.597324] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.597331] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.597353] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.597369] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.597402] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.597414] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.597423] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.597432] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:05.738 [2024-05-15 15:47:18.597440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:05.738 [2024-05-15 15:47:18.597449] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:05.738 [2024-05-15 15:47:18.597472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597482] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 
15:47:18.597494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.597505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597520] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.597529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.738 [2024-05-15 15:47:18.597571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.738 [2024-05-15 15:47:18.597583] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9ee0, cid 5, qid 0 00:29:05.738 [2024-05-15 15:47:18.597720] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.597736] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.597743] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597750] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.597761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.597771] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.597778] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597785] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9ee0) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.597801] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.597811] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.597821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.597844] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9ee0, cid 5, qid 0 00:29:05.738 [2024-05-15 15:47:18.598003] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.598016] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.598023] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598030] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9ee0) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.598046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598056] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.598067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.598088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9ee0, cid 5, qid 0 00:29:05.738 [2024-05-15 15:47:18.598194] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.598206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.598213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598229] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9ee0) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.598245] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.598266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.598288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9ee0, cid 5, qid 0 00:29:05.738 [2024-05-15 15:47:18.598394] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.598406] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.598413] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598420] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9ee0) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.598439] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598450] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.598461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.598473] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598484] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.598495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.598507] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598515] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.598524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.598536] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb82450) 00:29:05.738 [2024-05-15 15:47:18.598554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.738 [2024-05-15 15:47:18.598576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9ee0, cid 5, qid 0 00:29:05.738 [2024-05-15 15:47:18.598587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9d80, cid 4, qid 0 00:29:05.738 [2024-05-15 15:47:18.598595] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbea040, cid 6, qid 0 00:29:05.738 [2024-05-15 15:47:18.598603] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbea1a0, cid 7, qid 0 00:29:05.738 [2024-05-15 15:47:18.598824] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.598840] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.598847] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598853] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=8192, cccid=5 00:29:05.738 [2024-05-15 15:47:18.598861] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe9ee0) on tqpair(0xb82450): expected_datao=0, payload_size=8192 00:29:05.738 [2024-05-15 15:47:18.598869] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598879] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598888] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.598905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.598912] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598919] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=512, cccid=4 00:29:05.738 [2024-05-15 15:47:18.598926] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe9d80) on tqpair(0xb82450): expected_datao=0, payload_size=512 00:29:05.738 [2024-05-15 15:47:18.598934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598944] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598951] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.598968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.598975] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.598981] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb82450): datao=0, datal=512, cccid=6 00:29:05.738 [2024-05-15 15:47:18.598989] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbea040) on tqpair(0xb82450): expected_datao=0, payload_size=512 00:29:05.738 [2024-05-15 15:47:18.598997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599006] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599017] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599027] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.738 [2024-05-15 15:47:18.599036] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.738 [2024-05-15 15:47:18.599043] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599049] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xb82450): datao=0, datal=4096, cccid=7 00:29:05.738 [2024-05-15 15:47:18.599057] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbea1a0) on tqpair(0xb82450): expected_datao=0, payload_size=4096 00:29:05.738 [2024-05-15 15:47:18.599064] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599074] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599081] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.599103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.599110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599117] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9ee0) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.599136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.599147] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.599154] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599161] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9d80) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.599175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.599202] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.599209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbea040) on tqpair=0xb82450 00:29:05.738 [2024-05-15 15:47:18.599239] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.738 [2024-05-15 15:47:18.599251] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.738 [2024-05-15 15:47:18.599257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.738 [2024-05-15 15:47:18.599264] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbea1a0) on tqpair=0xb82450 00:29:05.738 ===================================================== 00:29:05.738 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.738 ===================================================== 00:29:05.738 Controller Capabilities/Features 00:29:05.738 ================================ 00:29:05.738 Vendor ID: 8086 00:29:05.738 Subsystem Vendor ID: 8086 00:29:05.738 Serial Number: SPDK00000000000001 00:29:05.738 Model Number: SPDK bdev Controller 00:29:05.738 Firmware Version: 24.05 00:29:05.738 Recommended Arb Burst: 6 00:29:05.738 IEEE OUI Identifier: e4 d2 5c 00:29:05.738 Multi-path I/O 00:29:05.738 May have multiple subsystem ports: Yes 00:29:05.738 May have multiple controllers: Yes 00:29:05.738 Associated with SR-IOV VF: No 00:29:05.738 Max Data Transfer Size: 131072 00:29:05.738 Max Number of Namespaces: 32 00:29:05.738 Max Number of I/O Queues: 127 00:29:05.739 NVMe Specification Version (VS): 1.3 00:29:05.739 NVMe Specification Version (Identify): 1.3 00:29:05.739 Maximum Queue Entries: 128 00:29:05.739 Contiguous Queues Required: Yes 00:29:05.739 Arbitration Mechanisms Supported 00:29:05.739 Weighted Round Robin: Not 
Supported 00:29:05.739 Vendor Specific: Not Supported 00:29:05.739 Reset Timeout: 15000 ms 00:29:05.739 Doorbell Stride: 4 bytes 00:29:05.739 NVM Subsystem Reset: Not Supported 00:29:05.739 Command Sets Supported 00:29:05.739 NVM Command Set: Supported 00:29:05.739 Boot Partition: Not Supported 00:29:05.739 Memory Page Size Minimum: 4096 bytes 00:29:05.739 Memory Page Size Maximum: 4096 bytes 00:29:05.739 Persistent Memory Region: Not Supported 00:29:05.739 Optional Asynchronous Events Supported 00:29:05.739 Namespace Attribute Notices: Supported 00:29:05.739 Firmware Activation Notices: Not Supported 00:29:05.739 ANA Change Notices: Not Supported 00:29:05.739 PLE Aggregate Log Change Notices: Not Supported 00:29:05.739 LBA Status Info Alert Notices: Not Supported 00:29:05.739 EGE Aggregate Log Change Notices: Not Supported 00:29:05.739 Normal NVM Subsystem Shutdown event: Not Supported 00:29:05.739 Zone Descriptor Change Notices: Not Supported 00:29:05.739 Discovery Log Change Notices: Not Supported 00:29:05.739 Controller Attributes 00:29:05.739 128-bit Host Identifier: Supported 00:29:05.739 Non-Operational Permissive Mode: Not Supported 00:29:05.739 NVM Sets: Not Supported 00:29:05.739 Read Recovery Levels: Not Supported 00:29:05.739 Endurance Groups: Not Supported 00:29:05.739 Predictable Latency Mode: Not Supported 00:29:05.739 Traffic Based Keep ALive: Not Supported 00:29:05.739 Namespace Granularity: Not Supported 00:29:05.739 SQ Associations: Not Supported 00:29:05.739 UUID List: Not Supported 00:29:05.739 Multi-Domain Subsystem: Not Supported 00:29:05.739 Fixed Capacity Management: Not Supported 00:29:05.739 Variable Capacity Management: Not Supported 00:29:05.739 Delete Endurance Group: Not Supported 00:29:05.739 Delete NVM Set: Not Supported 00:29:05.739 Extended LBA Formats Supported: Not Supported 00:29:05.739 Flexible Data Placement Supported: Not Supported 00:29:05.739 00:29:05.739 Controller Memory Buffer Support 00:29:05.739 ================================ 00:29:05.739 Supported: No 00:29:05.739 00:29:05.739 Persistent Memory Region Support 00:29:05.739 ================================ 00:29:05.739 Supported: No 00:29:05.739 00:29:05.739 Admin Command Set Attributes 00:29:05.739 ============================ 00:29:05.739 Security Send/Receive: Not Supported 00:29:05.739 Format NVM: Not Supported 00:29:05.739 Firmware Activate/Download: Not Supported 00:29:05.739 Namespace Management: Not Supported 00:29:05.739 Device Self-Test: Not Supported 00:29:05.739 Directives: Not Supported 00:29:05.739 NVMe-MI: Not Supported 00:29:05.739 Virtualization Management: Not Supported 00:29:05.739 Doorbell Buffer Config: Not Supported 00:29:05.739 Get LBA Status Capability: Not Supported 00:29:05.739 Command & Feature Lockdown Capability: Not Supported 00:29:05.739 Abort Command Limit: 4 00:29:05.739 Async Event Request Limit: 4 00:29:05.739 Number of Firmware Slots: N/A 00:29:05.739 Firmware Slot 1 Read-Only: N/A 00:29:05.739 Firmware Activation Without Reset: N/A 00:29:05.739 Multiple Update Detection Support: N/A 00:29:05.739 Firmware Update Granularity: No Information Provided 00:29:05.739 Per-Namespace SMART Log: No 00:29:05.739 Asymmetric Namespace Access Log Page: Not Supported 00:29:05.739 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:05.739 Command Effects Log Page: Supported 00:29:05.739 Get Log Page Extended Data: Supported 00:29:05.739 Telemetry Log Pages: Not Supported 00:29:05.739 Persistent Event Log Pages: Not Supported 00:29:05.739 Supported Log Pages Log Page: May 
Support 00:29:05.739 Commands Supported & Effects Log Page: Not Supported 00:29:05.739 Feature Identifiers & Effects Log Page:May Support 00:29:05.739 NVMe-MI Commands & Effects Log Page: May Support 00:29:05.739 Data Area 4 for Telemetry Log: Not Supported 00:29:05.739 Error Log Page Entries Supported: 128 00:29:05.739 Keep Alive: Supported 00:29:05.739 Keep Alive Granularity: 10000 ms 00:29:05.739 00:29:05.739 NVM Command Set Attributes 00:29:05.739 ========================== 00:29:05.739 Submission Queue Entry Size 00:29:05.739 Max: 64 00:29:05.739 Min: 64 00:29:05.739 Completion Queue Entry Size 00:29:05.739 Max: 16 00:29:05.739 Min: 16 00:29:05.739 Number of Namespaces: 32 00:29:05.739 Compare Command: Supported 00:29:05.739 Write Uncorrectable Command: Not Supported 00:29:05.739 Dataset Management Command: Supported 00:29:05.739 Write Zeroes Command: Supported 00:29:05.739 Set Features Save Field: Not Supported 00:29:05.739 Reservations: Supported 00:29:05.739 Timestamp: Not Supported 00:29:05.739 Copy: Supported 00:29:05.739 Volatile Write Cache: Present 00:29:05.739 Atomic Write Unit (Normal): 1 00:29:05.739 Atomic Write Unit (PFail): 1 00:29:05.739 Atomic Compare & Write Unit: 1 00:29:05.739 Fused Compare & Write: Supported 00:29:05.739 Scatter-Gather List 00:29:05.739 SGL Command Set: Supported 00:29:05.739 SGL Keyed: Supported 00:29:05.739 SGL Bit Bucket Descriptor: Not Supported 00:29:05.739 SGL Metadata Pointer: Not Supported 00:29:05.739 Oversized SGL: Not Supported 00:29:05.739 SGL Metadata Address: Not Supported 00:29:05.739 SGL Offset: Supported 00:29:05.739 Transport SGL Data Block: Not Supported 00:29:05.739 Replay Protected Memory Block: Not Supported 00:29:05.739 00:29:05.739 Firmware Slot Information 00:29:05.739 ========================= 00:29:05.739 Active slot: 1 00:29:05.739 Slot 1 Firmware Revision: 24.05 00:29:05.739 00:29:05.739 00:29:05.739 Commands Supported and Effects 00:29:05.739 ============================== 00:29:05.739 Admin Commands 00:29:05.739 -------------- 00:29:05.739 Get Log Page (02h): Supported 00:29:05.739 Identify (06h): Supported 00:29:05.739 Abort (08h): Supported 00:29:05.739 Set Features (09h): Supported 00:29:05.739 Get Features (0Ah): Supported 00:29:05.739 Asynchronous Event Request (0Ch): Supported 00:29:05.739 Keep Alive (18h): Supported 00:29:05.739 I/O Commands 00:29:05.739 ------------ 00:29:05.739 Flush (00h): Supported LBA-Change 00:29:05.739 Write (01h): Supported LBA-Change 00:29:05.739 Read (02h): Supported 00:29:05.739 Compare (05h): Supported 00:29:05.739 Write Zeroes (08h): Supported LBA-Change 00:29:05.739 Dataset Management (09h): Supported LBA-Change 00:29:05.739 Copy (19h): Supported LBA-Change 00:29:05.739 Unknown (79h): Supported LBA-Change 00:29:05.739 Unknown (7Ah): Supported 00:29:05.739 00:29:05.739 Error Log 00:29:05.739 ========= 00:29:05.739 00:29:05.739 Arbitration 00:29:05.739 =========== 00:29:05.739 Arbitration Burst: 1 00:29:05.739 00:29:05.739 Power Management 00:29:05.739 ================ 00:29:05.739 Number of Power States: 1 00:29:05.739 Current Power State: Power State #0 00:29:05.739 Power State #0: 00:29:05.739 Max Power: 0.00 W 00:29:05.739 Non-Operational State: Operational 00:29:05.739 Entry Latency: Not Reported 00:29:05.739 Exit Latency: Not Reported 00:29:05.739 Relative Read Throughput: 0 00:29:05.739 Relative Read Latency: 0 00:29:05.739 Relative Write Throughput: 0 00:29:05.739 Relative Write Latency: 0 00:29:05.739 Idle Power: Not Reported 00:29:05.739 Active Power: Not Reported 
00:29:05.739 Non-Operational Permissive Mode: Not Supported 00:29:05.739 00:29:05.739 Health Information 00:29:05.739 ================== 00:29:05.739 Critical Warnings: 00:29:05.739 Available Spare Space: OK 00:29:05.739 Temperature: OK 00:29:05.739 Device Reliability: OK 00:29:05.739 Read Only: No 00:29:05.739 Volatile Memory Backup: OK 00:29:05.739 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:05.739 Temperature Threshold: [2024-05-15 15:47:18.599394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599407] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.599418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.599442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbea1a0, cid 7, qid 0 00:29:05.739 [2024-05-15 15:47:18.599601] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.599616] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.599623] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbea1a0) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.599674] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:05.739 [2024-05-15 15:47:18.599696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.739 [2024-05-15 15:47:18.599708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.739 [2024-05-15 15:47:18.599719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.739 [2024-05-15 15:47:18.599733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.739 [2024-05-15 15:47:18.599747] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599755] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.599773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.599796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.739 [2024-05-15 15:47:18.599903] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.599918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.599925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599932] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.599943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599952] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.599958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.599969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.599996] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.739 [2024-05-15 15:47:18.600126] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.600141] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.600148] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.600163] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:05.739 [2024-05-15 15:47:18.600172] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:05.739 [2024-05-15 15:47:18.600189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.600222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.600245] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.739 [2024-05-15 15:47:18.600358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.600370] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.600377] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600384] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.600401] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600418] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.600428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.600450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.739 [2024-05-15 15:47:18.600559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.600572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.600579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600586] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.600602] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600612] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600619] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.600630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.600651] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.739 [2024-05-15 15:47:18.600752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.600767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.600774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600781] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.600798] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600808] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600815] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.739 [2024-05-15 15:47:18.600826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.739 [2024-05-15 15:47:18.600847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.739 [2024-05-15 15:47:18.600957] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.739 [2024-05-15 15:47:18.600969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.739 [2024-05-15 15:47:18.600976] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.600983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.739 [2024-05-15 15:47:18.600999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.739 [2024-05-15 15:47:18.601009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.740 [2024-05-15 15:47:18.601016] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.740 [2024-05-15 15:47:18.601026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.740 [2024-05-15 15:47:18.601047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.740 [2024-05-15 15:47:18.601151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.740 [2024-05-15 15:47:18.601163] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.740 [2024-05-15 15:47:18.601170] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.740 [2024-05-15 15:47:18.601177] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.740 [2024-05-15 15:47:18.601194] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.740 [2024-05-15 15:47:18.601203] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.740 [2024-05-15 
15:47:18.601210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb82450) 00:29:05.740 [2024-05-15 15:47:18.605245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.740 [2024-05-15 15:47:18.605272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe9c20, cid 3, qid 0 00:29:05.740 [2024-05-15 15:47:18.605434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.740 [2024-05-15 15:47:18.605451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.740 [2024-05-15 15:47:18.605459] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.740 [2024-05-15 15:47:18.605466] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbe9c20) on tqpair=0xb82450 00:29:05.740 [2024-05-15 15:47:18.605480] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:05.740 0 Kelvin (-273 Celsius) 00:29:05.740 Available Spare: 0% 00:29:05.740 Available Spare Threshold: 0% 00:29:05.740 Life Percentage Used: 0% 00:29:05.740 Data Units Read: 0 00:29:05.740 Data Units Written: 0 00:29:05.740 Host Read Commands: 0 00:29:05.740 Host Write Commands: 0 00:29:05.740 Controller Busy Time: 0 minutes 00:29:05.740 Power Cycles: 0 00:29:05.740 Power On Hours: 0 hours 00:29:05.740 Unsafe Shutdowns: 0 00:29:05.740 Unrecoverable Media Errors: 0 00:29:05.740 Lifetime Error Log Entries: 0 00:29:05.740 Warning Temperature Time: 0 minutes 00:29:05.740 Critical Temperature Time: 0 minutes 00:29:05.740 00:29:05.740 Number of Queues 00:29:05.740 ================ 00:29:05.740 Number of I/O Submission Queues: 127 00:29:05.740 Number of I/O Completion Queues: 127 00:29:05.740 00:29:05.740 Active Namespaces 00:29:05.740 ================= 00:29:05.740 Namespace ID:1 00:29:05.740 Error Recovery Timeout: Unlimited 00:29:05.740 Command Set Identifier: NVM (00h) 00:29:05.740 Deallocate: Supported 00:29:05.740 Deallocated/Unwritten Error: Not Supported 00:29:05.740 Deallocated Read Value: Unknown 00:29:05.740 Deallocate in Write Zeroes: Not Supported 00:29:05.740 Deallocated Guard Field: 0xFFFF 00:29:05.740 Flush: Supported 00:29:05.740 Reservation: Supported 00:29:05.740 Namespace Sharing Capabilities: Multiple Controllers 00:29:05.740 Size (in LBAs): 131072 (0GiB) 00:29:05.740 Capacity (in LBAs): 131072 (0GiB) 00:29:05.740 Utilization (in LBAs): 131072 (0GiB) 00:29:05.740 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:05.740 EUI64: ABCDEF0123456789 00:29:05.740 UUID: 090fd109-cec3-40c6-8c22-5bf609e797db 00:29:05.740 Thin Provisioning: Not Supported 00:29:05.740 Per-NS Atomic Units: Yes 00:29:05.740 Atomic Boundary Size (Normal): 0 00:29:05.740 Atomic Boundary Size (PFail): 0 00:29:05.740 Atomic Boundary Offset: 0 00:29:05.740 Maximum Single Source Range Length: 65535 00:29:05.740 Maximum Copy Length: 65535 00:29:05.740 Maximum Source Range Count: 1 00:29:05.740 NGUID/EUI64 Never Reused: No 00:29:05.740 Namespace Write Protected: No 00:29:05.740 Number of LBA Formats: 1 00:29:05.740 Current LBA Format: LBA Format #00 00:29:05.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:05.740 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:05.740 rmmod nvme_tcp 00:29:05.740 rmmod nvme_fabrics 00:29:05.740 rmmod nvme_keyring 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1413905 ']' 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1413905 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1413905 ']' 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1413905 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1413905 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1413905' 00:29:05.740 killing process with pid 1413905 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1413905 00:29:05.740 [2024-05-15 15:47:18.706389] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:05.740 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1413905 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.998 15:47:18 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.898 15:47:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.156 00:29:08.156 real 0m5.696s 00:29:08.156 user 0m4.241s 00:29:08.156 sys 0m2.168s 00:29:08.156 15:47:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:08.156 15:47:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:08.156 ************************************ 00:29:08.156 END TEST nvmf_identify 00:29:08.156 ************************************ 00:29:08.156 15:47:21 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:08.156 15:47:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:08.156 15:47:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:08.156 15:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:08.156 ************************************ 00:29:08.156 START TEST nvmf_perf 00:29:08.156 ************************************ 00:29:08.156 15:47:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:08.156 * Looking for test storage... 00:29:08.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.156 15:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.156 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:08.156 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.156 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # 
MALLOC_BLOCK_SIZE=512 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:08.157 15:47:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:10.683 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:10.683 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:10.683 Found net devices under 0000:09:00.0: cvl_0_0 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
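The xtrace above is gather_supported_nvmf_pci_devs from nvmf/common.sh resolving which NICs can carry the test traffic. A minimal standalone sketch of the same sysfs lookup follows; the PCI IDs (0x8086:0x159b, Intel E810) and the cvl_* interface names are taken from this log, everything else is an assumption.
# Sketch: find E810 ports and the kernel net devices bound to them, mirroring
# the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the trace above.
intel=0x8086
e810=0x159b
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
  pci_net_devs=("$pci"/net/*)                # sysfs exposes the bound netdev(s) here
  [[ -e ${pci_net_devs[0]} ]] || continue    # skip ports with no netdev (e.g. bound to vfio)
  pci_net_devs=("${pci_net_devs[@]##*/}")    # keep just the interface names, e.g. cvl_0_0
  echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done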
00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:10.683 Found net devices under 0000:09:00.1: cvl_0_1 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:10.683 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:10.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:10.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:29:10.684 00:29:10.684 --- 10.0.0.2 ping statistics --- 00:29:10.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.684 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:29:10.684 00:29:10.684 --- 10.0.0.1 ping statistics --- 00:29:10.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.684 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1416276 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1416276 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1416276 ']' 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:10.684 15:47:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.942 [2024-05-15 15:47:23.797020] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:29:10.942 [2024-05-15 15:47:23.797103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.942 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.942 [2024-05-15 15:47:23.839577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:10.942 [2024-05-15 15:47:23.870478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.942 [2024-05-15 15:47:23.952890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.942 [2024-05-15 15:47:23.952942] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.942 [2024-05-15 15:47:23.952970] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.942 [2024-05-15 15:47:23.952982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.942 [2024-05-15 15:47:23.952991] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.942 [2024-05-15 15:47:23.953085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.942 [2024-05-15 15:47:23.953142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.942 [2024-05-15 15:47:23.953227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.942 [2024-05-15 15:47:23.953228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:11.199 15:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:14.477 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:14.477 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:14.477 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:29:14.477 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:14.733 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:14.733 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:29:14.733 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:14.733 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:14.733 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:14.989 [2024-05-15 15:47:27.895484] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.989 15:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.245 15:47:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:15.245 15:47:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.501 15:47:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:15.501 15:47:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:15.758 15:47:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.015 [2024-05-15 15:47:28.870959] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:16.015 [2024-05-15 15:47:28.871294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.015 15:47:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:16.273 15:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:29:16.273 15:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:29:16.273 15:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:16.273 15:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:29:17.643 Initializing NVMe Controllers 00:29:17.643 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:29:17.643 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:29:17.643 Initialization complete. Launching workers. 00:29:17.643 ======================================================== 00:29:17.643 Latency(us) 00:29:17.643 Device Information : IOPS MiB/s Average min max 00:29:17.643 PCIE (0000:0b:00.0) NSID 1 from core 0: 84922.84 331.73 376.24 42.59 5243.04 00:29:17.643 ======================================================== 00:29:17.643 Total : 84922.84 331.73 376.24 42.59 5243.04 00:29:17.643 00:29:17.643 15:47:30 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:17.643 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.575 Initializing NVMe Controllers 00:29:18.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:18.575 Initialization complete. 
Launching workers. 00:29:18.575 ======================================================== 00:29:18.575 Latency(us) 00:29:18.575 Device Information : IOPS MiB/s Average min max 00:29:18.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.71 0.32 12488.99 166.18 45050.07 00:29:18.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.78 0.24 16313.91 7103.47 47894.68 00:29:18.575 ======================================================== 00:29:18.575 Total : 144.49 0.56 14124.47 166.18 47894.68 00:29:18.575 00:29:18.575 15:47:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.575 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.947 Initializing NVMe Controllers 00:29:19.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:19.947 Initialization complete. Launching workers. 00:29:19.947 ======================================================== 00:29:19.947 Latency(us) 00:29:19.947 Device Information : IOPS MiB/s Average min max 00:29:19.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8500.99 33.21 3765.30 555.16 7502.16 00:29:19.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3895.00 15.21 8270.83 6907.50 15720.18 00:29:19.947 ======================================================== 00:29:19.947 Total : 12395.99 48.42 5181.00 555.16 15720.18 00:29:19.947 00:29:19.947 15:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:19.947 15:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:19.947 15:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.947 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.491 Initializing NVMe Controllers 00:29:22.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.491 Controller IO queue size 128, less than required. 00:29:22.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.491 Controller IO queue size 128, less than required. 00:29:22.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:22.491 Initialization complete. Launching workers. 
00:29:22.491 ======================================================== 00:29:22.491 Latency(us) 00:29:22.491 Device Information : IOPS MiB/s Average min max 00:29:22.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1374.29 343.57 94619.45 64330.12 138570.65 00:29:22.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 623.90 155.98 215216.68 79272.17 314395.56 00:29:22.491 ======================================================== 00:29:22.491 Total : 1998.19 499.55 132274.03 64330.12 314395.56 00:29:22.491 00:29:22.491 15:47:35 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:22.491 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.491 No valid NVMe controllers or AIO or URING devices found 00:29:22.491 Initializing NVMe Controllers 00:29:22.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.491 Controller IO queue size 128, less than required. 00:29:22.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.491 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:22.491 Controller IO queue size 128, less than required. 00:29:22.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.491 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:22.491 WARNING: Some requested NVMe devices were skipped 00:29:22.748 15:47:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:22.748 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.273 Initializing NVMe Controllers 00:29:25.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.273 Controller IO queue size 128, less than required. 00:29:25.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.273 Controller IO queue size 128, less than required. 00:29:25.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:25.273 Initialization complete. Launching workers. 
00:29:25.273 00:29:25.273 ==================== 00:29:25.273 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:25.273 TCP transport: 00:29:25.273 polls: 13489 00:29:25.273 idle_polls: 6564 00:29:25.273 sock_completions: 6925 00:29:25.273 nvme_completions: 5667 00:29:25.273 submitted_requests: 8504 00:29:25.273 queued_requests: 1 00:29:25.273 00:29:25.273 ==================== 00:29:25.273 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:25.273 TCP transport: 00:29:25.273 polls: 12944 00:29:25.273 idle_polls: 5644 00:29:25.273 sock_completions: 7300 00:29:25.273 nvme_completions: 5387 00:29:25.273 submitted_requests: 8096 00:29:25.273 queued_requests: 1 00:29:25.273 ======================================================== 00:29:25.273 Latency(us) 00:29:25.273 Device Information : IOPS MiB/s Average min max 00:29:25.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1415.35 353.84 93063.50 48022.72 162998.99 00:29:25.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1345.41 336.35 96045.57 48240.35 122108.68 00:29:25.273 ======================================================== 00:29:25.273 Total : 2760.76 690.19 94516.76 48022.72 162998.99 00:29:25.273 00:29:25.273 15:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:25.273 15:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.531 15:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:25.531 15:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:29:25.531 15:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4cb9a563-3ea2-408e-8bdf-26f38fb61879 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4cb9a563-3ea2-408e-8bdf-26f38fb61879 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=4cb9a563-3ea2-408e-8bdf-26f38fb61879 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:29.708 15:47:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:29.708 { 00:29:29.708 "uuid": "4cb9a563-3ea2-408e-8bdf-26f38fb61879", 00:29:29.708 "name": "lvs_0", 00:29:29.708 "base_bdev": "Nvme0n1", 00:29:29.708 "total_data_clusters": 238234, 00:29:29.708 "free_clusters": 238234, 00:29:29.708 "block_size": 512, 00:29:29.708 "cluster_size": 4194304 00:29:29.708 } 00:29:29.708 ]' 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="4cb9a563-3ea2-408e-8bdf-26f38fb61879") .free_clusters' 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="4cb9a563-3ea2-408e-8bdf-26f38fb61879") .cluster_size' 00:29:29.708 15:47:42 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:29:29.708 952936 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4cb9a563-3ea2-408e-8bdf-26f38fb61879 lbd_0 20480 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3a8c843a-4fdf-4205-b394-a3044df24111 00:29:29.708 15:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3a8c843a-4fdf-4205-b394-a3044df24111 lvs_n_0 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=570e3bdf-6466-422c-b330-b77f26a44692 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 570e3bdf-6466-422c-b330-b77f26a44692 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=570e3bdf-6466-422c-b330-b77f26a44692 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:30.639 { 00:29:30.639 "uuid": "4cb9a563-3ea2-408e-8bdf-26f38fb61879", 00:29:30.639 "name": "lvs_0", 00:29:30.639 "base_bdev": "Nvme0n1", 00:29:30.639 "total_data_clusters": 238234, 00:29:30.639 "free_clusters": 233114, 00:29:30.639 "block_size": 512, 00:29:30.639 "cluster_size": 4194304 00:29:30.639 }, 00:29:30.639 { 00:29:30.639 "uuid": "570e3bdf-6466-422c-b330-b77f26a44692", 00:29:30.639 "name": "lvs_n_0", 00:29:30.639 "base_bdev": "3a8c843a-4fdf-4205-b394-a3044df24111", 00:29:30.639 "total_data_clusters": 5114, 00:29:30.639 "free_clusters": 5114, 00:29:30.639 "block_size": 512, 00:29:30.639 "cluster_size": 4194304 00:29:30.639 } 00:29:30.639 ]' 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="570e3bdf-6466-422c-b330-b77f26a44692") .free_clusters' 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:29:30.639 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="570e3bdf-6466-422c-b330-b77f26a44692") .cluster_size' 00:29:30.896 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:30.896 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:29:30.896 15:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:29:30.896 20456 00:29:30.896 15:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:30.896 15:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 570e3bdf-6466-422c-b330-b77f26a44692 lbd_nest_0 20456 00:29:31.153 15:47:44 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=1456b69e-cddd-4b39-9068-b8968c8e9f38 00:29:31.153 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.410 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:31.410 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1456b69e-cddd-4b39-9068-b8968c8e9f38 00:29:31.667 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.667 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:31.667 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:31.667 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:31.667 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:31.667 15:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.924 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.105 Initializing NVMe Controllers 00:29:44.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.106 Initialization complete. Launching workers. 00:29:44.106 ======================================================== 00:29:44.106 Latency(us) 00:29:44.106 Device Information : IOPS MiB/s Average min max 00:29:44.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.10 0.02 21757.80 193.38 45947.28 00:29:44.106 ======================================================== 00:29:44.106 Total : 46.10 0.02 21757.80 193.38 45947.28 00:29:44.106 00:29:44.106 15:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:44.106 15:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:44.106 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.066 Initializing NVMe Controllers 00:29:54.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.066 Initialization complete. Launching workers. 
00:29:54.066 ======================================================== 00:29:54.066 Latency(us) 00:29:54.066 Device Information : IOPS MiB/s Average min max 00:29:54.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.30 10.41 12010.88 5799.35 47883.80 00:29:54.066 ======================================================== 00:29:54.066 Total : 83.30 10.41 12010.88 5799.35 47883.80 00:29:54.066 00:29:54.066 15:48:05 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:54.066 15:48:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:54.066 15:48:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.066 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.049 Initializing NVMe Controllers 00:30:04.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.049 Initialization complete. Launching workers. 00:30:04.049 ======================================================== 00:30:04.049 Latency(us) 00:30:04.049 Device Information : IOPS MiB/s Average min max 00:30:04.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7429.50 3.63 4319.72 293.47 47993.76 00:30:04.049 ======================================================== 00:30:04.049 Total : 7429.50 3.63 4319.72 293.47 47993.76 00:30:04.049 00:30:04.049 15:48:15 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:04.049 15:48:15 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.049 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.048 Initializing NVMe Controllers 00:30:14.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.048 Initialization complete. Launching workers. 00:30:14.048 ======================================================== 00:30:14.048 Latency(us) 00:30:14.048 Device Information : IOPS MiB/s Average min max 00:30:14.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2693.85 336.73 11878.02 608.75 26129.88 00:30:14.048 ======================================================== 00:30:14.048 Total : 2693.85 336.73 11878.02 608.75 26129.88 00:30:14.048 00:30:14.048 15:48:26 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:14.048 15:48:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:14.048 15:48:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.048 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.007 Initializing NVMe Controllers 00:30:24.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.007 Controller IO queue size 128, less than required. 00:30:24.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:24.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.007 Initialization complete. Launching workers. 00:30:24.007 ======================================================== 00:30:24.007 Latency(us) 00:30:24.007 Device Information : IOPS MiB/s Average min max 00:30:24.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11922.22 5.82 10738.33 1826.32 27604.28 00:30:24.007 ======================================================== 00:30:24.007 Total : 11922.22 5.82 10738.33 1826.32 27604.28 00:30:24.007 00:30:24.007 15:48:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:24.007 15:48:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.007 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.199 Initializing NVMe Controllers 00:30:36.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.199 Controller IO queue size 128, less than required. 00:30:36.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.199 Initialization complete. Launching workers. 00:30:36.199 ======================================================== 00:30:36.199 Latency(us) 00:30:36.199 Device Information : IOPS MiB/s Average min max 00:30:36.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1218.27 152.28 105281.90 24771.33 183009.73 00:30:36.199 ======================================================== 00:30:36.199 Total : 1218.27 152.28 105281.90 24771.33 183009.73 00:30:36.199 00:30:36.199 15:48:47 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.199 15:48:47 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1456b69e-cddd-4b39-9068-b8968c8e9f38 00:30:36.199 15:48:48 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:36.199 15:48:48 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a8c843a-4fdf-4205-b394-a3044df24111 00:30:36.199 15:48:48 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.199 rmmod nvme_tcp 00:30:36.199 rmmod nvme_fabrics 00:30:36.199 rmmod nvme_keyring 00:30:36.199 15:48:49 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1416276 ']' 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1416276 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1416276 ']' 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1416276 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1416276 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1416276' 00:30:36.199 killing process with pid 1416276 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1416276 00:30:36.199 [2024-05-15 15:48:49.108897] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:36.199 15:48:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1416276 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.572 15:48:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.140 15:48:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.140 00:30:40.140 real 1m31.641s 00:30:40.140 user 5m35.373s 00:30:40.140 sys 0m16.483s 00:30:40.140 15:48:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:40.140 15:48:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:40.140 ************************************ 00:30:40.140 END TEST nvmf_perf 00:30:40.140 ************************************ 00:30:40.140 15:48:52 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:40.140 15:48:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:40.140 15:48:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:40.140 15:48:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.140 ************************************ 00:30:40.140 START TEST nvmf_fio_host 00:30:40.140 ************************************ 00:30:40.140 15:48:52 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:40.140 * Looking for test storage... 00:30:40.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.140 15:48:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.140 15:48:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.140 15:48:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.140 15:48:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
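Each stage in this log is driven by run_test, as in the nvmf_fio_host line above. To reproduce a single stage outside Jenkins, the direct call is just the test script with the transport argument; a sketch, assuming an already-built SPDK checkout at the path printed in the log and root privileges:
# Sketch: rerun only this host test stage by hand.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/host/fio.sh --transport=tcp
# The perf stage that just finished is the analogous test/nvmf/host/perf.sh invocation.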
00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.141 15:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
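nvmftestinit here repeats the nvmf_tcp_init plumbing that was logged in full during the perf stage (the 15:47:23 entries above): one E810 port is moved into a private network namespace so that initiator and target traffic actually crosses the NICs instead of loopback. Condensed into a sketch, with the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses copied from that log:
# Sketch: target gets its own netns; initiator stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP to the listener
ping -c 1 10.0.0.2                                               # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1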
00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:42.672 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:42.672 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.672 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:42.673 Found net devices under 0000:09:00.0: cvl_0_0 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:42.673 Found net devices under 0000:09:00.1: cvl_0_1 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:30:42.673 00:30:42.673 --- 10.0.0.2 ping statistics --- 00:30:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.673 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:30:42.673 00:30:42.673 --- 10.0.0.1 ping statistics --- 00:30:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.673 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=1429260 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 1429260 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1429260 ']' 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:42.673 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.673 [2024-05-15 15:48:55.497698] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:30:42.673 [2024-05-15 15:48:55.497774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.673 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.673 [2024-05-15 15:48:55.543305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
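The namespace plumbing and target launch traced above reduce to the following shell sequence — a minimal sketch lifted from this trace, using the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the nvmf_tgt flags reported by this particular run (other hosts will show different device names and install paths):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator side -> target IP inside the namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # inside the namespace -> initiator IP
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The harness backgrounds nvmf_tgt and waits for it on /var/tmp/spdk.sock (waitforlisten); the -m 0xF core mask corresponds to the four reactor cores reported below, and -e 0xFFFF enables the full tracepoint group mask referenced in the spdk_trace notices.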
00:30:42.673 [2024-05-15 15:48:55.580916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.673 [2024-05-15 15:48:55.670695] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.673 [2024-05-15 15:48:55.670747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.673 [2024-05-15 15:48:55.670764] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.673 [2024-05-15 15:48:55.670778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.673 [2024-05-15 15:48:55.670791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.673 [2024-05-15 15:48:55.670876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.673 [2024-05-15 15:48:55.670953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.673 [2024-05-15 15:48:55.671000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.673 [2024-05-15 15:48:55.671002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 [2024-05-15 15:48:55.792664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 Malloc1 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 [2024-05-15 15:48:55.863701] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:42.932 [2024-05-15 15:48:55.864000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
grep libclang_rt.asan 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:42.932 15:48:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.190 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:43.190 fio-3.35 00:30:43.190 Starting 1 thread 00:30:43.190 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.714 00:30:45.714 test: (groupid=0, jobs=1): err= 0: pid=1429369: Wed May 15 15:48:58 2024 00:30:45.714 read: IOPS=8226, BW=32.1MiB/s (33.7MB/s)(64.5MiB/2008msec) 00:30:45.714 slat (usec): min=2, max=112, avg= 2.46, stdev= 1.50 00:30:45.714 clat (usec): min=2626, max=15167, avg=8582.19, stdev=723.90 00:30:45.714 lat (usec): min=2648, max=15170, avg=8584.65, stdev=723.81 00:30:45.714 clat percentiles (usec): 00:30:45.714 | 1.00th=[ 6980], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8029], 00:30:45.714 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:30:45.714 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:30:45.714 | 99.00th=[10159], 99.50th=[10421], 99.90th=[12780], 99.95th=[13698], 00:30:45.714 | 99.99th=[14877] 00:30:45.714 bw ( KiB/s): min=32248, max=33328, per=100.00%, avg=32906.00, stdev=463.14, samples=4 00:30:45.714 iops : min= 8062, max= 8332, avg=8226.50, stdev=115.79, samples=4 00:30:45.714 write: IOPS=8232, BW=32.2MiB/s (33.7MB/s)(64.6MiB/2008msec); 0 zone resets 00:30:45.714 slat (usec): min=2, max=104, avg= 2.55, stdev= 1.25 00:30:45.714 clat (usec): min=1425, max=14277, avg=6920.43, stdev=618.98 00:30:45.714 lat (usec): min=1431, max=14280, avg=6922.98, stdev=618.94 00:30:45.714 clat percentiles (usec): 00:30:45.714 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:30:45.714 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:30:45.714 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:30:45.714 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[12780], 99.95th=[13698], 00:30:45.714 | 99.99th=[14222] 00:30:45.714 bw ( KiB/s): min=32608, max=33232, per=100.00%, avg=32940.00, stdev=281.52, samples=4 00:30:45.714 iops : min= 8152, max= 8308, avg=8235.00, stdev=70.38, samples=4 00:30:45.714 lat (msec) : 2=0.03%, 4=0.08%, 10=98.92%, 20=0.97% 00:30:45.714 cpu : usr=59.74%, sys=35.72%, ctx=68, majf=0, minf=46 00:30:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:45.714 issued rwts: total=16519,16531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:45.714 00:30:45.714 Run status group 0 (all jobs): 00:30:45.715 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.7MB), run=2008-2008msec 00:30:45.715 WRITE: 
bw=32.2MiB/s (33.7MB/s), 32.2MiB/s-32.2MiB/s (33.7MB/s-33.7MB/s), io=64.6MiB (67.7MB), run=2008-2008msec 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:45.715 15:48:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:45.715 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:45.715 fio-3.35 00:30:45.715 Starting 1 thread 00:30:45.715 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.241 00:30:48.241 test: (groupid=0, jobs=1): err= 0: pid=1429757: Wed May 15 15:49:00 2024 00:30:48.241 read: IOPS=8157, BW=127MiB/s (134MB/s)(256MiB/2009msec) 00:30:48.241 slat (nsec): 
min=3029, max=94106, avg=3864.17, stdev=1849.21 00:30:48.241 clat (usec): min=2774, max=52628, avg=9218.02, stdev=3939.26 00:30:48.241 lat (usec): min=2778, max=52632, avg=9221.88, stdev=3939.32 00:30:48.241 clat percentiles (usec): 00:30:48.241 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:30:48.241 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:30:48.241 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11600], 95.00th=[12387], 00:30:48.241 | 99.00th=[15270], 99.50th=[46400], 99.90th=[51119], 99.95th=[52167], 00:30:48.241 | 99.99th=[52691] 00:30:48.241 bw ( KiB/s): min=53184, max=76608, per=51.57%, avg=67312.00, stdev=10875.15, samples=4 00:30:48.241 iops : min= 3324, max= 4788, avg=4207.00, stdev=679.70, samples=4 00:30:48.241 write: IOPS=4906, BW=76.7MiB/s (80.4MB/s)(138MiB/1795msec); 0 zone resets 00:30:48.241 slat (usec): min=30, max=278, avg=35.21, stdev= 8.54 00:30:48.241 clat (usec): min=5036, max=20755, avg=11293.97, stdev=2124.34 00:30:48.241 lat (usec): min=5068, max=20830, avg=11329.18, stdev=2126.75 00:30:48.241 clat percentiles (usec): 00:30:48.241 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:48.241 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:30:48.241 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14091], 95.00th=[15270], 00:30:48.241 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:30:48.241 | 99.99th=[20841] 00:30:48.241 bw ( KiB/s): min=55328, max=79680, per=89.25%, avg=70064.00, stdev=11538.43, samples=4 00:30:48.241 iops : min= 3458, max= 4980, avg=4379.00, stdev=721.15, samples=4 00:30:48.241 lat (msec) : 4=0.06%, 10=57.41%, 20=42.01%, 50=0.39%, 100=0.12% 00:30:48.241 cpu : usr=75.65%, sys=21.51%, ctx=25, majf=0, minf=68 00:30:48.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:30:48.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:48.241 issued rwts: total=16388,8807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:48.241 00:30:48.241 Run status group 0 (all jobs): 00:30:48.241 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2009-2009msec 00:30:48.241 WRITE: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=138MiB (144MB), run=1795-1795msec 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:48.241 15:49:00 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.241 15:49:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.764 Nvme0n1 00:30:50.764 15:49:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.764 15:49:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:50.764 15:49:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.764 15:49:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=ac063c43-427b-4473-b1b8-37585e58a67c 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb ac063c43-427b-4473-b1b8-37585e58a67c 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=ac063c43-427b-4473-b1b8-37585e58a67c 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:54.042 { 00:30:54.042 "uuid": "ac063c43-427b-4473-b1b8-37585e58a67c", 00:30:54.042 "name": "lvs_0", 00:30:54.042 "base_bdev": "Nvme0n1", 00:30:54.042 "total_data_clusters": 930, 00:30:54.042 "free_clusters": 930, 00:30:54.042 "block_size": 512, 00:30:54.042 "cluster_size": 1073741824 00:30:54.042 } 00:30:54.042 ]' 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="ac063c43-427b-4473-b1b8-37585e58a67c") .free_clusters' 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="ac063c43-427b-4473-b1b8-37585e58a67c") .cluster_size' 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:54.042 952320 00:30:54.042 15:49:06 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.042 e82d5582-0896-43f5-8124-df8060859ddb 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 
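For reference, the rpc_cmd calls in this block map onto plain rpc.py invocations — a sketch only, where rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py talking to the target's /var/tmp/spdk.sock, and the PCI address, cluster size, volume size, and names are the ones this run reported:

rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2
rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

The fio_nvme helper that runs next then drives the exported namespace through the SPDK fio plugin, roughly:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096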
00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:54.042 15:49:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:54.042 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:54.042 fio-3.35 00:30:54.042 Starting 1 thread 00:30:54.042 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.570 00:30:56.570 test: (groupid=0, jobs=1): err= 0: pid=1430838: Wed May 15 15:49:09 2024 00:30:56.570 read: IOPS=6064, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2007msec) 00:30:56.570 slat (usec): min=2, max=139, avg= 2.60, stdev= 1.94 00:30:56.570 clat (usec): min=872, max=171255, avg=11600.32, stdev=11608.89 00:30:56.570 lat (usec): min=875, max=171291, avg=11602.92, stdev=11609.13 00:30:56.570 clat percentiles (msec): 00:30:56.570 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:56.570 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:56.570 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:56.570 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:30:56.570 | 99.99th=[ 171] 00:30:56.570 bw ( KiB/s): min=16976, max=26816, per=99.72%, avg=24190.00, stdev=4811.91, samples=4 00:30:56.570 iops : min= 4244, max= 6704, avg=6047.50, stdev=1202.98, samples=4 00:30:56.570 write: IOPS=6042, BW=23.6MiB/s (24.8MB/s)(47.4MiB/2007msec); 0 zone resets 00:30:56.570 slat (usec): min=2, max=101, avg= 2.70, stdev= 1.45 00:30:56.570 clat (usec): min=323, max=169341, avg=9381.05, stdev=10899.32 00:30:56.570 lat (usec): min=326, max=169347, avg=9383.75, stdev=10899.55 00:30:56.570 clat percentiles (msec): 00:30:56.570 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:56.570 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:56.570 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:56.570 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:30:56.570 | 99.99th=[ 169] 00:30:56.570 bw ( KiB/s): min=18024, max=26304, per=99.93%, avg=24154.00, stdev=4088.11, samples=4 00:30:56.570 iops : min= 4506, max= 6576, avg=6038.50, stdev=1022.03, samples=4 00:30:56.570 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:56.570 lat (msec) : 2=0.03%, 4=0.12%, 10=58.19%, 20=41.11%, 250=0.53% 00:30:56.571 cpu : usr=59.37%, sys=37.34%, ctx=77, majf=0, minf=46 00:30:56.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:56.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:56.571 issued rwts: total=12171,12128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:56.571 00:30:56.571 Run status group 0 (all jobs): 00:30:56.571 READ: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.9MB), run=2007-2007msec 00:30:56.571 WRITE: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.4MiB (49.7MB), run=2007-2007msec 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.571 15:49:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=92ee5ed3-0280-47ca-ab35-6ee228733d53 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 92ee5ed3-0280-47ca-ab35-6ee228733d53 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=92ee5ed3-0280-47ca-ab35-6ee228733d53 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:57.504 { 00:30:57.504 "uuid": "ac063c43-427b-4473-b1b8-37585e58a67c", 00:30:57.504 "name": "lvs_0", 00:30:57.504 "base_bdev": "Nvme0n1", 00:30:57.504 "total_data_clusters": 930, 00:30:57.504 "free_clusters": 0, 00:30:57.504 "block_size": 512, 00:30:57.504 "cluster_size": 1073741824 00:30:57.504 }, 00:30:57.504 { 00:30:57.504 "uuid": "92ee5ed3-0280-47ca-ab35-6ee228733d53", 00:30:57.504 "name": "lvs_n_0", 00:30:57.504 "base_bdev": "e82d5582-0896-43f5-8124-df8060859ddb", 00:30:57.504 "total_data_clusters": 237847, 00:30:57.504 "free_clusters": 237847, 00:30:57.504 "block_size": 512, 00:30:57.504 "cluster_size": 4194304 00:30:57.504 } 00:30:57.504 ]' 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="92ee5ed3-0280-47ca-ab35-6ee228733d53") .free_clusters' 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="92ee5ed3-0280-47ca-ab35-6ee228733d53") .cluster_size' 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:57.504 951388 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.504 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.762 3eb4773d-0c11-4636-99dd-df28089ded45 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:57.762 15:49:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:58.020 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:58.020 fio-3.35 00:30:58.020 Starting 1 thread 00:30:58.020 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.546 00:31:00.546 test: (groupid=0, jobs=1): err= 0: pid=1431424: Wed May 15 15:49:13 2024 00:31:00.546 read: IOPS=5867, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec) 00:31:00.546 slat (usec): min=2, max=143, avg= 2.76, stdev= 2.26 00:31:00.546 clat (usec): min=4340, max=19978, avg=12016.71, stdev=1036.12 00:31:00.546 lat (usec): min=4344, max=19981, avg=12019.47, stdev=1036.02 00:31:00.546 clat percentiles (usec): 00:31:00.546 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:31:00.546 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:31:00.546 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:31:00.546 | 99.00th=[14353], 99.50th=[14615], 99.90th=[17433], 99.95th=[18744], 00:31:00.546 | 99.99th=[20055] 00:31:00.546 bw ( KiB/s): min=22376, max=23968, per=99.90%, avg=23448.00, stdev=734.98, samples=4 00:31:00.546 iops : min= 5594, max= 5992, avg=5862.00, stdev=183.75, samples=4 00:31:00.546 write: IOPS=5858, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec); 0 zone resets 00:31:00.546 slat (usec): min=2, max=101, avg= 2.85, stdev= 1.63 00:31:00.546 clat (usec): min=2020, max=18400, avg=9641.39, stdev=905.86 00:31:00.546 lat (usec): min=2025, max=18403, avg=9644.23, stdev=905.82 00:31:00.546 clat percentiles (usec): 00:31:00.546 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:31:00.546 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:31:00.546 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:31:00.546 | 99.00th=[11600], 99.50th=[12125], 99.90th=[15795], 99.95th=[17433], 00:31:00.546 | 
99.99th=[18482] 00:31:00.546 bw ( KiB/s): min=23312, max=23488, per=99.91%, avg=23414.00, stdev=88.45, samples=4 00:31:00.546 iops : min= 5828, max= 5872, avg=5853.50, stdev=22.11, samples=4 00:31:00.546 lat (msec) : 4=0.04%, 10=34.69%, 20=65.26% 00:31:00.546 cpu : usr=59.76%, sys=36.95%, ctx=75, majf=0, minf=46 00:31:00.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:00.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:00.546 issued rwts: total=11788,11770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:00.546 00:31:00.546 Run status group 0 (all jobs): 00:31:00.546 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.3MB), run=2009-2009msec 00:31:00.546 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.546 15:49:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.723 15:49:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.249 15:49:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - 
SIGINT SIGTERM EXIT 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.620 rmmod nvme_tcp 00:31:08.620 rmmod nvme_fabrics 00:31:08.620 rmmod nvme_keyring 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1429260 ']' 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1429260 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1429260 ']' 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1429260 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1429260 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1429260' 00:31:08.620 killing process with pid 1429260 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1429260 00:31:08.620 [2024-05-15 15:49:21.454028] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1429260 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.620 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.621 15:49:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.621 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.621 15:49:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.161 15:49:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.161 00:31:11.161 real 0m30.959s 00:31:11.161 user 1m50.055s 00:31:11.161 
sys 0m6.387s 00:31:11.161 15:49:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:11.161 15:49:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.161 ************************************ 00:31:11.161 END TEST nvmf_fio_host 00:31:11.161 ************************************ 00:31:11.161 15:49:23 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:11.161 15:49:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:11.161 15:49:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:11.161 15:49:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:11.161 ************************************ 00:31:11.161 START TEST nvmf_failover 00:31:11.161 ************************************ 00:31:11.161 15:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:11.161 * Looking for test storage... 00:31:11.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:11.162 15:49:23 
nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.162 15:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:13.695 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:13.695 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.695 
15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:13.695 Found net devices under 0000:09:00.0: cvl_0_0 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.695 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:13.696 Found net devices under 0000:09:00.1: cvl_0_1 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.696 15:49:26 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:31:13.696 00:31:13.696 --- 10.0.0.2 ping statistics --- 00:31:13.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.696 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:31:13.696 00:31:13.696 --- 10.0.0.1 ping statistics --- 00:31:13.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.696 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1434818 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1434818 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1434818 ']' 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
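For readers following along, the namespace plumbing that the common.sh trace above performs can be reproduced by hand. The sketch below is a simplification, not the harness's exact code: it assumes the two ice ports discovered above already show up as cvl_0_0 and cvl_0_1, that the commands are run as root from the SPDK repository root, and it keeps the same addresses, port, and core mask that appear in the log.

# Put one NIC port (the target side) into its own network namespace and give
# each side an address on 10.0.0.0/24, mirroring the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in on the initiator-side port and verify reachability
# in both directions, as the two ping checks in the log do.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application is then launched inside the namespace; -m 0xE pins it
# to cores 1-3, which is why three reactors are reported started below.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE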
00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:13.696 [2024-05-15 15:49:26.485449] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:13.696 [2024-05-15 15:49:26.485550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.696 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.696 [2024-05-15 15:49:26.529753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:13.696 [2024-05-15 15:49:26.560760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:13.696 [2024-05-15 15:49:26.645740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.696 [2024-05-15 15:49:26.645801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.696 [2024-05-15 15:49:26.645830] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.696 [2024-05-15 15:49:26.645841] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.696 [2024-05-15 15:49:26.645850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.696 [2024-05-15 15:49:26.645938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.696 [2024-05-15 15:49:26.646004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.696 [2024-05-15 15:49:26.646008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:13.696 15:49:26 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:13.954 [2024-05-15 15:49:27.055096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.214 15:49:27 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:14.472 Malloc0 00:31:14.472 15:49:27 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.730 15:49:27 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.987 15:49:27 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.987 [2024-05-15 15:49:28.063102] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:14.987 [2024-05-15 15:49:28.063387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.987 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.245 [2024-05-15 15:49:28.320076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:15.245 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:15.502 [2024-05-15 15:49:28.560888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1435104 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1435104 /var/tmp/bdevperf.sock 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1435104 ']' 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:15.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
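The RPC sequence traced above is the whole failover setup in miniature: the target exports one malloc-backed namespace through three TCP listeners, and bdevperf on the initiator side attaches two paths to the same controller before I/O starts, so that removing one listener forces traffic onto the other. The sketch below condenses those calls into a plain script; it assumes it is run from the SPDK repository root with the target from the previous step already listening on its default RPC socket, and it leaves out the harness's waitforlisten/trap bookkeeping.

# Target side: transport, a 64 MB malloc bdev with 512-byte blocks, one
# subsystem with one namespace, and listeners on ports 4420/4421/4422.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf in RPC-wait mode (-z), then two paths to the same
# subsystem, then the verify workload.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# While the workload runs, failover is exercised by removing and re-adding
# listeners, as the remove_listener/add_listener calls in the trace below do.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420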
00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:15.502 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:16.068 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:16.068 15:49:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:16.068 15:49:28 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:16.326 NVMe0n1 00:31:16.326 15:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:16.584 00:31:16.584 15:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1435239 00:31:16.584 15:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:16.584 15:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:17.517 15:49:30 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.775 [2024-05-15 15:49:30.762435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cee50 is same with the state(5) to be set (same message repeated for timestamps 15:49:30.762509 through 15:49:30.762888) 15:49:30 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:21.053 15:49:33 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4
-n nqn.2016-06.io.spdk:cnode1 00:31:21.053 00:31:21.053 15:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:21.312 [2024-05-15 15:49:34.352812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set (same message repeated for timestamps 15:49:34.352876 through 15:49:34.353647) 00:31:21.313 [2024-05-15 15:49:34.353659]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 [2024-05-15 15:49:34.353744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf680 is same with the state(5) to be set 00:31:21.313 15:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:24.634 15:49:37 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.634 [2024-05-15 15:49:37.597903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.634 15:49:37 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:25.568 15:49:38 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:25.826 [2024-05-15 15:49:38.891978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16751d0 is same with the state(5) to be set 00:31:25.826 15:49:38 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1435239 00:31:32.399 0 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1435104 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1435104 ']' 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1435104 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1435104 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1435104' 00:31:32.399 killing process with pid 1435104 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1435104 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1435104 00:31:32.399 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:32.399 [2024-05-15 15:49:28.621121] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:32.399 [2024-05-15 15:49:28.621197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435104 ] 00:31:32.399 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.399 [2024-05-15 15:49:28.657066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:32.399 [2024-05-15 15:49:28.690391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.399 [2024-05-15 15:49:28.774341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.399 Running I/O for 15 seconds... 00:31:32.399 [2024-05-15 15:49:30.764026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.399 [2024-05-15 15:49:30.764867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.399 [2024-05-15 15:49:30.764883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 
[2024-05-15 15:49:30.764896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.764912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.764926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.764941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.764954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.764969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.764983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.764998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.400 [2024-05-15 15:49:30.765835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.765982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.765997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.766011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.766026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.766039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 15:49:30.766054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.400 [2024-05-15 15:49:30.766067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.400 [2024-05-15 
15:49:30.766082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.401 [2024-05-15 15:49:30.766656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.766704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77840 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.766717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.766748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.766760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77848 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.766773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.766798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.766809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.766822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.766846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.766857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77864 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.766870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.766894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.766906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.766918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.766943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.766954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77880 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.766967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.766980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.766991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 
15:49:30.767002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77888 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.767033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.767043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.767055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77896 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.767081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.767092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.767103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.767129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.767140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.767152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77912 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.767178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.767189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.767201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77920 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.767233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.767245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.767256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77928 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.401 [2024-05-15 15:49:30.767282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.401 [2024-05-15 15:49:30.767292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.401 [2024-05-15 15:49:30.767303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:31:32.401 [2024-05-15 15:49:30.767316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77952 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77968 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77976 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:77984 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78008 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78024 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78032 len:8 PRP1 0x0 PRP2 0x0 
00:31:32.402 [2024-05-15 15:49:30.767897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78040 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.767958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.767969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.767980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78048 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.767993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78056 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78064 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77360 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77376 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.402 [2024-05-15 15:49:30.768447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.402 [2024-05-15 15:49:30.768458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.402 [2024-05-15 15:49:30.768469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:31:32.402 [2024-05-15 15:49:30.768482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.403 [2024-05-15 15:49:30.768505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.403 [2024-05-15 15:49:30.768516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:31:32.403 [2024-05-15 15:49:30.768533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.403 [2024-05-15 15:49:30.768557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.403 [2024-05-15 15:49:30.768568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:31:32.403 [2024-05-15 15:49:30.768581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.403 [2024-05-15 15:49:30.768604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.403 [2024-05-15 15:49:30.768615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:31:32.403 [2024-05-15 15:49:30.768628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768693] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b87320 was disconnected and freed. reset controller. 
00:31:32.403 [2024-05-15 15:49:30.768718] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:32.403 [2024-05-15 15:49:30.768753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.403 [2024-05-15 15:49:30.768771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.403 [2024-05-15 15:49:30.768806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.403 [2024-05-15 15:49:30.768834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.403 [2024-05-15 15:49:30.768861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:30.768874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:32.403 [2024-05-15 15:49:30.768928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66240 (9): Bad file descriptor 00:31:32.403 [2024-05-15 15:49:30.772171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:32.403 [2024-05-15 15:49:30.962912] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
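Editor's note (not part of the captured log): the "ABORTED - SQ DELETION (00/08)" completions repeated above are the NVMe generic status (SCT 0x0) / status code 0x08 that the target returns for I/O still queued when a submission queue is torn down during the reset/failover exercised here. A minimal, hypothetical sketch of how an application's I/O completion callback could recognize that status with the public SPDK NVMe API follows; the callback name and comments are illustrative assumptions, not code from this test.

    /* Hedged sketch: detect "ABORTED - SQ DELETION (00/08)" in an I/O
     * completion callback using the public SPDK API. Requires SPDK headers. */
    #include "spdk/nvme.h"

    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The submission queue was deleted (e.g. during the
                     * controller reset/failover seen in this log); the I/O
                     * was aborted by the transport rather than failed by the
                     * device, so callers typically requeue it and resubmit
                     * after the controller reset completes. */
                    return;
            }
            /* ... handle success or other status codes ... */
    }
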
00:31:32.403 [2024-05-15 15:49:34.354822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.354868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.354894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.354909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.354925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.354944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.354960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.354974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.354989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 
15:49:34.355156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.403 [2024-05-15 15:49:34.355379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.403 [2024-05-15 15:49:34.355409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.403 [2024-05-15 15:49:34.355438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.403 [2024-05-15 15:49:34.355466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.403 [2024-05-15 15:49:34.355495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.403 [2024-05-15 15:49:34.355526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.355974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.355988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110952 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:32.404 [2024-05-15 15:49:34.356392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 
15:49:34.356712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.404 [2024-05-15 15:49:34.356741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.404 [2024-05-15 15:49:34.356755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.356975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.356989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.405 [2024-05-15 15:49:34.357282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357317] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.405 [2024-05-15 15:49:34.357673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.357720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111320 len:8 PRP1 0x0 PRP2 0x0 00:31:32.405 [2024-05-15 15:49:34.357733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.405 [2024-05-15 15:49:34.357953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.357965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111328 len:8 PRP1 0x0 PRP2 0x0 00:31:32.405 [2024-05-15 15:49:34.357979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.357995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.405 [2024-05-15 15:49:34.358007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.358018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111336 len:8 PRP1 0x0 PRP2 0x0 00:31:32.405 [2024-05-15 15:49:34.358030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.358043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.405 [2024-05-15 15:49:34.358054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.358064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111344 len:8 PRP1 0x0 PRP2 0x0 00:31:32.405 [2024-05-15 15:49:34.358077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.358106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.405 [2024-05-15 15:49:34.358117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.358129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111352 len:8 PRP1 0x0 PRP2 0x0 00:31:32.405 [2024-05-15 15:49:34.358141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:32.405 [2024-05-15 15:49:34.358154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.405 [2024-05-15 15:49:34.358165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.358176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111360 len:8 PRP1 0x0 PRP2 0x0 00:31:32.405 [2024-05-15 15:49:34.358188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.405 [2024-05-15 15:49:34.358201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.405 [2024-05-15 15:49:34.358222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.405 [2024-05-15 15:49:34.358236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111368 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111376 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111384 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111392 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111400 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358451] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111408 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111416 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111424 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111432 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111440 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111448 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111456 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111464 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110640 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110648 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.358953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110656 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.358970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.358983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.358994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110664 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 
15:49:34.359042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110672 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110680 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110688 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110696 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110704 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110712 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359340] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.406 [2024-05-15 15:49:34.359359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110720 len:8 PRP1 0x0 PRP2 0x0 00:31:32.406 [2024-05-15 15:49:34.359372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.406 [2024-05-15 15:49:34.359385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.406 [2024-05-15 15:49:34.359396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110728 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110736 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110744 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110752 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110760 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110448 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110456 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110464 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110472 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110480 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.359899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110488 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 
15:49:34.359947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110496 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.359966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.359979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.359990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110504 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110512 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110520 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110528 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110536 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110544 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110552 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110560 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.407 [2024-05-15 15:49:34.360385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.407 [2024-05-15 15:49:34.360396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.407 [2024-05-15 15:49:34.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110568 len:8 PRP1 0x0 PRP2 0x0 00:31:32.407 [2024-05-15 15:49:34.360421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110576 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110768 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:110776 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110784 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110792 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110800 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110808 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110816 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110824 len:8 PRP1 0x0 PRP2 0x0 
00:31:32.408 [2024-05-15 15:49:34.360864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110832 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.360952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110840 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.360965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.360978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.360989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110848 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110856 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110864 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110872 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110880 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110888 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110896 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110904 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110912 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110920 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.361459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.361472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.361483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.361494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110928 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.367500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.367533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.367546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.367559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110936 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.367573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.367586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.408 [2024-05-15 15:49:34.367597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.408 [2024-05-15 15:49:34.367608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110944 len:8 PRP1 0x0 PRP2 0x0 00:31:32.408 [2024-05-15 15:49:34.367621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.408 [2024-05-15 15:49:34.367634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110952 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110960 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110968 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110976 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110984 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110992 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111000 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.367957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.367970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.367980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.367991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111008 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111016 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 
[2024-05-15 15:49:34.368067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111024 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111032 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111040 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111048 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111056 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111064 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368361] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111072 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111080 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111088 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111096 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111104 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111112 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111120 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111128 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.409 [2024-05-15 15:49:34.368752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.409 [2024-05-15 15:49:34.368763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111136 len:8 PRP1 0x0 PRP2 0x0 00:31:32.409 [2024-05-15 15:49:34.368779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.409 [2024-05-15 15:49:34.368792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.368803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.368814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111144 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.368827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.368840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.368850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.368862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111152 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.368874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.368887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.368897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.368908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111160 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.368921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.368934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 
15:49:34.368945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.368955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111168 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.368968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.368981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.368991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111176 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111184 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111192 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111200 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111208 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369238] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110584 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110592 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110600 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110608 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110616 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110624 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110632 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111216 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111224 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111232 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111240 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111248 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 
[2024-05-15 15:49:34.369819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111256 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111264 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.410 [2024-05-15 15:49:34.369917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111272 len:8 PRP1 0x0 PRP2 0x0 00:31:32.410 [2024-05-15 15:49:34.369930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.410 [2024-05-15 15:49:34.369942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.410 [2024-05-15 15:49:34.369953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.411 [2024-05-15 15:49:34.369965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111280 len:8 PRP1 0x0 PRP2 0x0 00:31:32.411 [2024-05-15 15:49:34.369977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.369990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.411 [2024-05-15 15:49:34.370000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.411 [2024-05-15 15:49:34.370011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111288 len:8 PRP1 0x0 PRP2 0x0 00:31:32.411 [2024-05-15 15:49:34.370024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.411 [2024-05-15 15:49:34.370047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.411 [2024-05-15 15:49:34.370058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111296 len:8 PRP1 0x0 PRP2 0x0 00:31:32.411 [2024-05-15 15:49:34.370070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.411 [2024-05-15 15:49:34.370093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.411 [2024-05-15 15:49:34.370104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111304 len:8 PRP1 0x0 PRP2 0x0 00:31:32.411 [2024-05-15 15:49:34.370117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.411 [2024-05-15 15:49:34.370140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.411 [2024-05-15 15:49:34.370151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111312 len:8 PRP1 0x0 PRP2 0x0 00:31:32.411 [2024-05-15 15:49:34.370164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.411 [2024-05-15 15:49:34.370188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.411 [2024-05-15 15:49:34.370199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111320 len:8 PRP1 0x0 PRP2 0x0 00:31:32.411 [2024-05-15 15:49:34.370212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370288] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b89440 was disconnected and freed. reset controller. 00:31:32.411 [2024-05-15 15:49:34.370307] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:32.411 [2024-05-15 15:49:34.370348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.411 [2024-05-15 15:49:34.370367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.411 [2024-05-15 15:49:34.370396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.411 [2024-05-15 15:49:34.370423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.411 [2024-05-15 15:49:34.370449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:34.370463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:32.411 [2024-05-15 15:49:34.370504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66240 (9): Bad file descriptor 00:31:32.411 [2024-05-15 15:49:34.373764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:32.411 [2024-05-15 15:49:34.412040] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:32.411 [2024-05-15 15:49:38.892158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 
[2024-05-15 15:49:38.892785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.411 [2024-05-15 15:49:38.892933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.411 [2024-05-15 15:49:38.892946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.892960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.892974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.892988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.412 [2024-05-15 15:49:38.893808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.412 [2024-05-15 15:49:38.893836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.412 [2024-05-15 15:49:38.893864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.412 [2024-05-15 15:49:38.893892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.412 [2024-05-15 15:49:38.893923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.412 [2024-05-15 15:49:38.893951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.412 
[2024-05-15 15:49:38.893979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.412 [2024-05-15 15:49:38.893993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.413 [2024-05-15 15:49:38.894319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.894983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.894998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.895026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.895059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.895089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.895118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.895147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 
[2024-05-15 15:49:38.895176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.413 [2024-05-15 15:49:38.895189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.413 [2024-05-15 15:49:38.895204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.414 [2024-05-15 15:49:38.895920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.414 [2024-05-15 15:49:38.895948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.895963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b618b0 is same with the state(5) to be set 00:31:32.414 [2024-05-15 15:49:38.895982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:32.414 [2024-05-15 15:49:38.895993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:32.414 [2024-05-15 15:49:38.896005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38920 len:8 PRP1 0x0 PRP2 0x0 00:31:32.414 [2024-05-15 15:49:38.896019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.896087] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b618b0 was disconnected and freed. reset controller. 
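The burst of notices above is bdev_nvme draining its queue when the old submission queue is deleted during failover: every command still outstanding on qid 1 is completed manually with ABORTED - SQ DELETION (00/08) before the qpair is freed and the controller reset is scheduled, which is exactly what the last notice reports. To gauge how many in-flight commands were cut off in a captured run, a hypothetical one-liner against the test's try.txt capture (not part of failover.sh itself, and assuming $testdir points at test/nvmf/host) is enough:

# Hypothetical post-mortem check; the pattern is copied verbatim from the notices above.
grep -c 'ABORTED - SQ DELETION (00/08)' "$testdir/try.txt"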
00:31:32.414 [2024-05-15 15:49:38.896104] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:32.414 [2024-05-15 15:49:38.896137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.414 [2024-05-15 15:49:38.896155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.896170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.414 [2024-05-15 15:49:38.896183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.896197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.414 [2024-05-15 15:49:38.896210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.896233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.414 [2024-05-15 15:49:38.896252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.414 [2024-05-15 15:49:38.896266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:32.414 [2024-05-15 15:49:38.899543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:32.414 [2024-05-15 15:49:38.899583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66240 (9): Bad file descriptor 00:31:32.414 [2024-05-15 15:49:39.062946] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:32.414 00:31:32.414 Latency(us) 00:31:32.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.414 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:32.414 Verification LBA range: start 0x0 length 0x4000 00:31:32.414 NVMe0n1 : 15.01 8385.37 32.76 1027.69 0.00 13570.43 755.48 25826.04 00:31:32.414 =================================================================================================================== 00:31:32.414 Total : 8385.37 32.76 1027.69 0.00 13570.43 755.48 25826.04 00:31:32.414 Received shutdown signal, test time was about 15.000000 seconds 00:31:32.414 00:31:32.414 Latency(us) 00:31:32.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.414 =================================================================================================================== 00:31:32.414 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1436959 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1436959 /var/tmp/bdevperf.sock 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1436959 ']' 00:31:32.414 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:32.415 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:32.415 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
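Condensed out of the xtrace above, the pass/fail gate and the relaunch of bdevperf come down to the sketch below ($rootdir and $testdir stand in for the long Jenkins workspace paths, and the capture file name is assumed to be the test's try.txt; the real failover.sh keeps these in its own variables):

# Exactly three "Resetting controller successful" events are expected,
# one per failover hop exercised by the first bdevperf run.
count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
(( count == 3 )) || exit 1

# Second stage: start bdevperf in RPC-driven mode (-z) on its own socket;
# the queued verify workload is only kicked off later over that socket.
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock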
00:31:32.415 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:32.415 15:49:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:32.415 15:49:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:32.415 15:49:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:32.415 15:49:45 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:32.415 [2024-05-15 15:49:45.467519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:32.415 15:49:45 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:32.672 [2024-05-15 15:49:45.700162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:32.672 15:49:45 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:32.930 NVMe0n1 00:31:32.930 15:49:46 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:33.495 00:31:33.495 15:49:46 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:33.753 00:31:33.753 15:49:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:33.753 15:49:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:34.011 15:49:46 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:34.011 15:49:47 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:37.290 15:49:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:37.290 15:49:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:37.549 15:49:50 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1437623 00:31:37.549 15:49:50 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:37.549 15:49:50 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1437623 00:31:38.483 0 00:31:38.483 15:49:51 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:38.483 [2024-05-15 15:49:44.989189] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:31:38.483 [2024-05-15 15:49:44.989284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436959 ] 00:31:38.483 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.483 [2024-05-15 15:49:45.024855] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:38.483 [2024-05-15 15:49:45.057577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.483 [2024-05-15 15:49:45.138375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.483 [2024-05-15 15:49:47.083163] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:38.483 [2024-05-15 15:49:47.083263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.483 [2024-05-15 15:49:47.083302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.483 [2024-05-15 15:49:47.083320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.483 [2024-05-15 15:49:47.083334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.483 [2024-05-15 15:49:47.083348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.483 [2024-05-15 15:49:47.083361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.483 [2024-05-15 15:49:47.083375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.483 [2024-05-15 15:49:47.083389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.483 [2024-05-15 15:49:47.083402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:38.483 [2024-05-15 15:49:47.083439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:38.483 [2024-05-15 15:49:47.083469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693240 (9): Bad file descriptor 00:31:38.483 [2024-05-15 15:49:47.090408] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:38.483 Running I/O for 1 seconds... 
00:31:38.483 00:31:38.483 Latency(us) 00:31:38.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.483 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:38.483 Verification LBA range: start 0x0 length 0x4000 00:31:38.483 NVMe0n1 : 1.00 8324.56 32.52 0.00 0.00 15312.32 3131.16 15631.55 00:31:38.483 =================================================================================================================== 00:31:38.483 Total : 8324.56 32.52 0.00 0.00 15312.32 3131.16 15631.55 00:31:38.483 15:49:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:38.483 15:49:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:38.740 15:49:51 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:38.998 15:49:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:38.998 15:49:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:39.255 15:49:52 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:39.513 15:49:52 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1436959 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1436959 ']' 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1436959 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1436959 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1436959' 00:31:42.792 killing process with pid 1436959 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1436959 00:31:42.792 15:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1436959 00:31:43.050 15:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:43.050 15:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:43.320 
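With the second bdevperf waiting on its socket, the rest of the stage is pure RPC plumbing: expose the subsystem on the two extra ports, attach the same controller through all three paths, drop the active path, run the queued verify job, then peel off the remaining paths and tear bdevperf down. A condensed sketch of the sequence traced above (the rpc.py subcommands, ports and NQN are taken verbatim from the trace; the loop structure and variable names are assumptions, and the real script guards each step with a bdev_nvme_get_controllers | grep -q NVMe0 check):

rpc="$rootdir/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

# Failover targets: the subsystem already listens on 4420, add 4421 and 4422.
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

# Register all three paths with the running bdevperf instance.
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done

# Drop the primary path, then drive the queued 1-second verify job over RPC;
# bdev_nvme is expected to fail over to one of the surviving paths.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
sleep 3
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
wait $!

# Remove the remaining paths, stop bdevperf and drop the subsystem.
for port in 4422 4421; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done
killprocess "$bdevperf_pid"
sync
$rpc nvmf_delete_subsystem "$nqn"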
15:49:56 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:43.320 rmmod nvme_tcp 00:31:43.320 rmmod nvme_fabrics 00:31:43.320 rmmod nvme_keyring 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1434818 ']' 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1434818 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1434818 ']' 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1434818 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1434818 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1434818' 00:31:43.320 killing process with pid 1434818 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1434818 00:31:43.320 [2024-05-15 15:49:56.296958] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:43.320 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1434818 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:43.622 15:49:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.526 15:49:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:45.526 00:31:45.526 real 0m34.816s 00:31:45.526 user 
2m1.250s 00:31:45.526 sys 0m5.844s 00:31:45.526 15:49:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:45.526 15:49:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:45.526 ************************************ 00:31:45.526 END TEST nvmf_failover 00:31:45.526 ************************************ 00:31:45.526 15:49:58 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:45.526 15:49:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:45.526 15:49:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:45.526 15:49:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:45.785 ************************************ 00:31:45.785 START TEST nvmf_host_discovery 00:31:45.785 ************************************ 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:45.785 * Looking for test storage... 00:31:45.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:45.785 15:49:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:48.323 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.323 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:48.324 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:48.324 Found net devices under 0000:09:00.0: cvl_0_0 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:48.324 Found net devices under 0000:09:00.1: cvl_0_1 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.324 15:50:01 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:48.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:31:48.324 00:31:48.324 --- 10.0.0.2 ping statistics --- 00:31:48.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.324 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:31:48.324 00:31:48.324 --- 10.0.0.1 ping statistics --- 00:31:48.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.324 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1440634 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1440634 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1440634 ']' 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:48.324 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.324 [2024-05-15 15:50:01.370751] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:48.324 [2024-05-15 15:50:01.370838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.324 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.324 [2024-05-15 15:50:01.414136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
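In short, the nvmf_tcp_init step traced above reduces to the following shell sequence (a minimal reconstruction from the xtrace lines; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing are simply what this run detected under 0000:09:00.0 and 0000:09:00.1, not fixed values):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator-side interface
    ping -c 1 10.0.0.2                                # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator sanity check

Both pings coming back with 0% loss is what lets nvmf_tcp_init return 0, after which the target application is started inside the namespace.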
00:31:48.582 [2024-05-15 15:50:01.447420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.582 [2024-05-15 15:50:01.534354] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.582 [2024-05-15 15:50:01.534408] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.582 [2024-05-15 15:50:01.534437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.582 [2024-05-15 15:50:01.534449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.583 [2024-05-15 15:50:01.534460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.583 [2024-05-15 15:50:01.534502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.583 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.583 [2024-05-15 15:50:01.681860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.841 [2024-05-15 15:50:01.689799] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:48.841 [2024-05-15 15:50:01.690085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.841 null0 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.841 null1 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1440659 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1440659 /tmp/host.sock 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1440659 ']' 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:48.841 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:48.841 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:48.842 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:48.842 15:50:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.842 [2024-05-15 15:50:01.764053] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:48.842 [2024-05-15 15:50:01.764128] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440659 ] 00:31:48.842 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.842 [2024-05-15 15:50:01.799347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
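Stripped of the xtrace prefixes, the bring-up traced above is roughly this sequence (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock; paths and arguments are as logged for this run):

    # target: nvmf_tgt -i 0 -e 0xFFFF -m 0x2, running inside cvl_0_0_ns_spdk
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512    # two null bdevs to be exported as namespaces later
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine

    # host: a second nvmf_tgt instance acts as the NVMe-oF host, on its own RPC socket
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock

The host side then points bdev_nvme discovery at the target's discovery listener (bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test, issued against /tmp/host.sock a few lines below), and the remainder of the trace polls what that discovery service attaches.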
00:31:48.842 [2024-05-15 15:50:01.831675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.842 [2024-05-15 15:50:01.915799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.101 15:50:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:49.101 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.359 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:49.359 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:49.359 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.359 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.359 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.359 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 [2024-05-15 15:50:02.303736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.360 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.618 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 
'' == \n\v\m\e\0 ]] 00:31:49.618 15:50:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:50.184 [2024-05-15 15:50:03.090395] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:50.184 [2024-05-15 15:50:03.090423] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:50.184 [2024-05-15 15:50:03.090451] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:50.184 [2024-05-15 15:50:03.177775] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:50.184 [2024-05-15 15:50:03.280753] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:50.184 [2024-05-15 15:50:03.280790] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:50.441 
15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:50.441 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' 
'((notification_count' == 'expected_count))' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:50.699 15:50:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.631 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.887 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.887 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:51.887 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:51.887 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:51.887 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.888 [2024-05-15 15:50:04.771175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:51.888 [2024-05-15 15:50:04.771785] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:51.888 [2024-05-15 15:50:04.771839] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # 
local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.888 [2024-05-15 15:50:04.857487] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 
'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:51.888 15:50:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:51.888 [2024-05-15 15:50:04.914995] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:51.888 [2024-05-15 15:50:04.915020] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:51.888 [2024-05-15 15:50:04.915031] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:52.818 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 
-- # expected_count=0 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.077 15:50:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.077 [2024-05-15 15:50:05.999636] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:53.077 [2024-05-15 15:50:05.999670] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:53.077 [2024-05-15 15:50:06.004980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.077 [2024-05-15 15:50:06.005015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.077 [2024-05-15 15:50:06.005034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.077 [2024-05-15 15:50:06.005050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.077 [2024-05-15 15:50:06.005066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.077 [2024-05-15 15:50:06.005082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.077 [2024-05-15 15:50:06.005098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.077 [2024-05-15 15:50:06.005113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.077 [2024-05-15 15:50:06.005127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.077 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.078 [2024-05-15 15:50:06.014977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.078 [2024-05-15 15:50:06.025026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.078 [2024-05-15 15:50:06.025281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.025429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.025455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.078 [2024-05-15 15:50:06.025473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.078 [2024-05-15 15:50:06.025496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.078 [2024-05-15 15:50:06.025524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.078 [2024-05-15 15:50:06.025554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.078 [2024-05-15 15:50:06.025582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:53.078 [2024-05-15 15:50:06.025605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.078 [2024-05-15 15:50:06.035107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.078 [2024-05-15 15:50:06.035303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.035434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.035460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.078 [2024-05-15 15:50:06.035476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.078 [2024-05-15 15:50:06.035498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.078 [2024-05-15 15:50:06.035533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.078 [2024-05-15 15:50:06.035550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.078 [2024-05-15 15:50:06.035564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.078 [2024-05-15 15:50:06.035583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:53.078 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:53.078 [2024-05-15 15:50:06.045181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.078 [2024-05-15 15:50:06.045372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.045527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.045553] 
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.078 [2024-05-15 15:50:06.045569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.078 [2024-05-15 15:50:06.045591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.078 [2024-05-15 15:50:06.045612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.078 [2024-05-15 15:50:06.045630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.078 [2024-05-15 15:50:06.045644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.078 [2024-05-15 15:50:06.045663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.078 [2024-05-15 15:50:06.055277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.078 [2024-05-15 15:50:06.055434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.055569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.055595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.078 [2024-05-15 15:50:06.055611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.078 [2024-05-15 15:50:06.055633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.078 [2024-05-15 15:50:06.055667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.078 [2024-05-15 15:50:06.055684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.078 [2024-05-15 15:50:06.055697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.078 [2024-05-15 15:50:06.055716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
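The waitforcondition plumbing that repeats throughout the trace (common/autotest_common.sh lines @910-@916) is essentially a bounded poll loop along these lines (a sketch inferred from the traced statements; the timeout/failure path is not visible above and is assumed here):

    waitforcondition() {
        local cond=$1          # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1               # assumed: none of the successful checks above ever reach this
    }

Each get_* helper it evaluates is itself an RPC against the host socket, exactly as the @55/@59/@63 lines show: bdev_nvme_get_controllers piped through jq -r '.[].name' for subsystem names, bdev_get_bdevs through jq -r '.[].name' for the namespace-backed bdevs, and bdev_nvme_get_controllers -n nvme0 through jq -r '.[].ctrlrs[].trid.trsvcid' for the listener ports.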
00:31:53.078 [2024-05-15 15:50:06.065349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.078 [2024-05-15 15:50:06.065531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.065652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.078 [2024-05-15 15:50:06.065678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.078 [2024-05-15 15:50:06.065694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.078 [2024-05-15 15:50:06.065716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.078 [2024-05-15 15:50:06.065736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.078 [2024-05-15 15:50:06.065750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.079 [2024-05-15 15:50:06.065763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.079 [2024-05-15 15:50:06.065782] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.079 [2024-05-15 15:50:06.075420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.079 [2024-05-15 15:50:06.075686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.075829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.075855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.079 [2024-05-15 15:50:06.075871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.079 [2024-05-15 15:50:06.075893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.079 [2024-05-15 15:50:06.075926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.079 [2024-05-15 15:50:06.075949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.079 [2024-05-15 15:50:06.075963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.079 [2024-05-15 15:50:06.075983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:53.079 [2024-05-15 15:50:06.085523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.079 [2024-05-15 15:50:06.085717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.085883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.085910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.079 [2024-05-15 15:50:06.085926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.079 [2024-05-15 15:50:06.085948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.079 [2024-05-15 15:50:06.085969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.079 [2024-05-15 15:50:06.085983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.079 [2024-05-15 15:50:06.085996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.079 [2024-05-15 15:50:06.086014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.079 [2024-05-15 15:50:06.095604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.079 [2024-05-15 15:50:06.095824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.095965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.095990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.079 [2024-05-15 15:50:06.096007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.079 [2024-05-15 15:50:06.096028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.079 [2024-05-15 15:50:06.096068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.079 [2024-05-15 15:50:06.096085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.079 [2024-05-15 15:50:06.096099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.079 [2024-05-15 15:50:06.096117] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.079 [2024-05-15 15:50:06.105679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.079 [2024-05-15 15:50:06.105865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.106033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.106059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.079 [2024-05-15 15:50:06.106074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.079 [2024-05-15 15:50:06.106096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.079 [2024-05-15 15:50:06.106116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.079 [2024-05-15 15:50:06.106130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.079 [2024-05-15 15:50:06.106143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.079 [2024-05-15 15:50:06.106161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:53.079 [2024-05-15 15:50:06.115749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.079 [2024-05-15 15:50:06.115968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.116091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.079 [2024-05-15 15:50:06.116118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.079 [2024-05-15 15:50:06.116135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.079 [2024-05-15 15:50:06.116157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.079 [2024-05-15 15:50:06.116190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.079 [2024-05-15 15:50:06.116207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.079 [2024-05-15 15:50:06.116231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.079 [2024-05-15 15:50:06.116251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:31:53.079 15:50:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:53.080 [2024-05-15 15:50:06.125837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.080 [2024-05-15 15:50:06.126038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.080 [2024-05-15 15:50:06.126188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.080 [2024-05-15 15:50:06.126230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2496600 with addr=10.0.0.2, port=4420 00:31:53.080 [2024-05-15 15:50:06.126267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496600 is same with the state(5) to be set 00:31:53.080 [2024-05-15 15:50:06.126289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496600 (9): Bad file descriptor 00:31:53.080 [2024-05-15 15:50:06.126315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:53.080 [2024-05-15 15:50:06.126330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:53.080 [2024-05-15 15:50:06.126343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:53.080 [2024-05-15 15:50:06.126362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
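At this point get_subsystem_paths still reports both ports, so the [[ 4420 4421 == \4\4\2\1 ]] comparison above fails and the wait loop sleeps for a second before retrying. The check the helper performs can be reproduced by hand against the host application's RPC socket; a minimal sketch, using the workspace's scripts/rpc.py and the /tmp/host.sock socket shown in the trace:

  # List the transport service IDs (ports) of every path attached to controller nvme0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # While the stale 4420 path is still attached this prints "4420 4421";
  # once it is dropped only "4421" remains and the waitforcondition check passes.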
00:31:53.080 [2024-05-15 15:50:06.126560] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:53.080 [2024-05-15 15:50:06.126588] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:54.451 15:50:07 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:54.451 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:54.452 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:54.452 15:50:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:54.452 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.452 15:50:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.383 [2024-05-15 15:50:08.424946] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:55.383 [2024-05-15 15:50:08.424989] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:55.383 [2024-05-15 15:50:08.425014] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:55.640 [2024-05-15 15:50:08.513288] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:55.640 [2024-05-15 15:50:08.577669] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:55.640 [2024-05-15 15:50:08.577724] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:55.640 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.640 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:55.641 request: 00:31:55.641 { 00:31:55.641 "name": "nvme", 00:31:55.641 "trtype": "tcp", 00:31:55.641 "traddr": "10.0.0.2", 00:31:55.641 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:55.641 "adrfam": "ipv4", 00:31:55.641 "trsvcid": "8009", 00:31:55.641 "wait_for_attach": true, 00:31:55.641 "method": "bdev_nvme_start_discovery", 00:31:55.641 "req_id": 1 00:31:55.641 } 00:31:55.641 Got JSON-RPC error response 00:31:55.641 response: 00:31:55.641 { 00:31:55.641 "code": -17, 00:31:55.641 "message": "File exists" 00:31:55.641 } 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.641 request: 00:31:55.641 { 00:31:55.641 "name": "nvme_second", 00:31:55.641 "trtype": "tcp", 00:31:55.641 "traddr": "10.0.0.2", 00:31:55.641 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:55.641 "adrfam": "ipv4", 00:31:55.641 "trsvcid": "8009", 00:31:55.641 "wait_for_attach": true, 00:31:55.641 "method": "bdev_nvme_start_discovery", 00:31:55.641 "req_id": 1 00:31:55.641 } 00:31:55.641 Got JSON-RPC error response 00:31:55.641 response: 00:31:55.641 { 00:31:55.641 "code": -17, 00:31:55.641 "message": "File exists" 00:31:55.641 } 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:55.641 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.898 15:50:08 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.898 15:50:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.830 [2024-05-15 15:50:09.805457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.830 [2024-05-15 15:50:09.805637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.830 [2024-05-15 15:50:09.805680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2631950 with addr=10.0.0.2, port=8010 00:31:56.830 [2024-05-15 15:50:09.805712] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:56.830 [2024-05-15 15:50:09.805729] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:56.830 [2024-05-15 15:50:09.805743] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:57.762 [2024-05-15 15:50:10.807890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.762 [2024-05-15 15:50:10.808126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.762 [2024-05-15 15:50:10.808156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c1080 with addr=10.0.0.2, port=8010 00:31:57.762 [2024-05-15 15:50:10.808190] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:57.762 [2024-05-15 15:50:10.808207] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:57.763 [2024-05-15 15:50:10.808232] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:59.135 [2024-05-15 15:50:11.810042] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:59.135 request: 00:31:59.135 { 00:31:59.135 "name": "nvme_second", 00:31:59.135 "trtype": "tcp", 00:31:59.135 "traddr": "10.0.0.2", 00:31:59.135 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:59.135 "adrfam": "ipv4", 00:31:59.135 "trsvcid": "8010", 00:31:59.135 "attach_timeout_ms": 3000, 00:31:59.135 
"method": "bdev_nvme_start_discovery", 00:31:59.135 "req_id": 1 00:31:59.135 } 00:31:59.135 Got JSON-RPC error response 00:31:59.135 response: 00:31:59.135 { 00:31:59.135 "code": -110, 00:31:59.135 "message": "Connection timed out" 00:31:59.135 } 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1440659 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:59.135 rmmod nvme_tcp 00:31:59.135 rmmod nvme_fabrics 00:31:59.135 rmmod nvme_keyring 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1440634 ']' 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1440634 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1440634 ']' 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1440634 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1440634 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1440634' 00:31:59.135 killing process with pid 1440634 00:31:59.135 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1440634 00:31:59.135 [2024-05-15 15:50:11.944135] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:59.136 15:50:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1440634 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.136 15:50:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:01.701 00:32:01.701 real 0m15.581s 00:32:01.701 user 0m23.009s 00:32:01.701 sys 0m3.265s 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.701 ************************************ 00:32:01.701 END TEST nvmf_host_discovery 00:32:01.701 ************************************ 00:32:01.701 15:50:14 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:01.701 15:50:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:01.701 15:50:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:01.701 15:50:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:01.701 ************************************ 00:32:01.701 START TEST nvmf_host_multipath_status 00:32:01.701 ************************************ 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:01.701 * Looking for test storage... 
00:32:01.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:01.701 15:50:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:01.701 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:01.702 15:50:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:04.234 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:04.234 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.234 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:04.235 Found net devices under 0000:09:00.0: cvl_0_0 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:04.235 Found net devices under 0000:09:00.1: cvl_0_1 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:04.235 15:50:16 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:04.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:32:04.235 00:32:04.235 --- 10.0.0.2 ping statistics --- 00:32:04.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.235 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:04.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:32:04.235 00:32:04.235 --- 10.0.0.1 ping statistics --- 00:32:04.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.235 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1444249 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1444249 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1444249 ']' 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:04.235 15:50:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:04.235 [2024-05-15 15:50:17.039321] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:04.235 [2024-05-15 15:50:17.039396] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.235 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.235 [2024-05-15 15:50:17.081771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:32:04.235 [2024-05-15 15:50:17.114647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:04.235 [2024-05-15 15:50:17.197993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.236 [2024-05-15 15:50:17.198076] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.236 [2024-05-15 15:50:17.198104] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.236 [2024-05-15 15:50:17.198116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.236 [2024-05-15 15:50:17.198125] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.236 [2024-05-15 15:50:17.198246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.236 [2024-05-15 15:50:17.198253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1444249 00:32:04.236 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:04.494 [2024-05-15 15:50:17.556506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.494 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:04.752 Malloc0 00:32:04.752 15:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:05.010 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:05.268 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.526 [2024-05-15 15:50:18.575232] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:05.526 [2024-05-15 15:50:18.575576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.526 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:05.784 [2024-05-15 15:50:18.816084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1444530 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1444530 /var/tmp/bdevperf.sock 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1444530 ']' 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:05.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:05.784 15:50:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:06.042 15:50:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:06.042 15:50:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:06.042 15:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:06.299 15:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:06.865 Nvme0n1 00:32:06.865 15:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:07.431 Nvme0n1 00:32:07.431 15:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:07.431 15:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:09.328 15:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:09.328 15:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.2 -s 4420 -n optimized 00:32:09.585 15:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:09.843 15:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:10.777 15:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:10.777 15:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:10.777 15:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.777 15:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:11.035 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.035 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:11.035 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.035 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.292 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.292 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.292 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.292 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.550 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.550 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.550 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.550 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:11.808 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.808 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:11.808 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.808 15:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 
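Each port_status call in the trace follows the same pattern: query the bdevperf RPC socket for its NVMe I/O paths and let jq pull one field for the path whose listener port is under test. A minimal sketch of that pattern, reusing the RPC method, socket, and jq filter exactly as they appear in this run:

  # port_status <port> <field>: read one field of the io_path whose
  # transport.trsvcid matches the given listener port (4420 or 4421).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
  # check_status then compares the printed true/false against the expected
  # current/connected/accessible values for both ports after each ANA change.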
00:32:12.065 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.065 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:12.065 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.065 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:12.335 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.335 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:12.335 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:12.598 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:12.855 15:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:13.789 15:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:13.789 15:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:13.789 15:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.789 15:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.047 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:14.047 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:14.047 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.047 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.305 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.305 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.305 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.305 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:14.562 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.562 15:50:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:14.562 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.562 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:14.820 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.820 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:14.820 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.820 15:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:15.078 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.078 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:15.078 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.078 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.336 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.336 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:15.336 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:15.594 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:15.853 15:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:17.263 15:50:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:17.263 15:50:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:17.263 15:50:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.263 15:50:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:17.263 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.263 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:17.263 15:50:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.263 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.544 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.544 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.544 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.544 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.802 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.802 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.802 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.802 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:18.060 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.060 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:18.060 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.060 15:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:18.317 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.317 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:18.317 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.317 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.575 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.575 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:18.575 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:18.575 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:18.833 15:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:20.206 15:50:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:20.206 15:50:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:20.206 15:50:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.206 15:50:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:20.206 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.206 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:20.206 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.206 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:20.464 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:20.464 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:20.464 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.464 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:20.721 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.721 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:20.721 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.721 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:20.979 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.979 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:20.979 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.979 15:50:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:21.236 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.236 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:21.237 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.237 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:21.493 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:21.493 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:21.493 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:21.751 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:22.009 15:50:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:22.942 15:50:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:22.942 15:50:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:22.942 15:50:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.942 15:50:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:23.200 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.200 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:23.200 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.200 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:23.457 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.457 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:23.457 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.457 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:23.715 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.715 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:23.715 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.715 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:23.971 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.971 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:23.971 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.971 15:50:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:24.228 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.228 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:24.228 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.228 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:24.484 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.484 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:24.485 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:24.741 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:24.997 15:50:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:26.049 15:50:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:26.049 15:50:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:26.049 15:50:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.049 15:50:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:26.306 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:26.306 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:26.306 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:26.306 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:26.564 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:26.564 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:26.564 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.564 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:26.821 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:26.821 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:26.821 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.821 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:27.078 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.078 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:27.078 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.078 15:50:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:27.336 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:27.336 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:27.336 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.336 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:27.593 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.593 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:27.593 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:27.593 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:27.851 15:50:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:28.108 15:50:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.477 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:29.734 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.734 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:29.734 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.734 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:29.991 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.991 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:29.991 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.991 15:50:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:30.249 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.249 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:30.249 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.249 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:30.506 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:32:30.506 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:30.506 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.506 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:30.764 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.764 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:30.764 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:31.021 15:50:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:31.280 15:50:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:32.214 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:32.214 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:32.214 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.214 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:32.472 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:32.472 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:32.472 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.472 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:32.729 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.729 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:32.729 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.729 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:32.987 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.987 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected 
true 00:32:32.987 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.987 15:50:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:33.244 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.244 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:33.244 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.244 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:33.502 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.502 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:33.502 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.502 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:33.759 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.759 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:33.759 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:34.017 15:50:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:34.275 15:50:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:35.207 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:35.207 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:35.207 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.207 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:35.465 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.465 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:35.465 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.465 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:35.723 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.723 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:35.723 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.723 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:35.981 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.981 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:35.981 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.981 15:50:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:36.238 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.238 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:36.238 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.238 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:36.496 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.496 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:36.496 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.496 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:36.754 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.754 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:36.754 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:37.012 15:50:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n inaccessible 00:32:37.268 15:50:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:38.200 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:38.200 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:38.200 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.200 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:38.459 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.459 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:38.459 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.459 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:38.716 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:38.717 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:38.717 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.717 15:50:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:38.975 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.975 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:38.975 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.975 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:39.233 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.233 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:39.233 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.233 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:39.491 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.491 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:39.491 
15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.491 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1444530 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1444530 ']' 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1444530 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1444530 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:39.749 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1444530' 00:32:39.749 killing process with pid 1444530 00:32:39.750 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1444530 00:32:39.750 15:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1444530 00:32:40.011 Connection closed with partial response: 00:32:40.011 00:32:40.011 00:32:40.011 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1444530 00:32:40.012 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:40.012 [2024-05-15 15:50:18.870501] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:40.012 [2024-05-15 15:50:18.870593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444530 ] 00:32:40.012 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.012 [2024-05-15 15:50:18.907644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.012 [2024-05-15 15:50:18.940964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.012 [2024-05-15 15:50:19.025425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:40.012 Running I/O for 90 seconds... 
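For readability, the trace above repeatedly exercises three small helpers from test/nvmf/host/multipath_status.sh: port_status (script line 64), check_status (lines 68-73) and set_ANA_state (lines 59-60). The following is a rough reconstruction sketched from the command lines visible in the trace, not the script itself; variable names such as rpc_py, bdevperf_rpc_sock and NQN are assumptions for illustration.

    #!/usr/bin/env bash
    # Sketch reconstructed from the trace above; names below are illustrative assumptions.
    rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # port_status <trsvcid> <field> <expected>: ask bdevperf (over its RPC socket)
    # for its I/O paths and assert that the given field (current/connected/accessible)
    # of the path listening on <trsvcid> matches the expected value.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # check_status <4420.current> <4421.current> <4420.connected> <4421.connected> \
    #              <4420.accessible> <4421.accessible>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

    # set_ANA_state <state for 4420> <state for 4421>: flip the ANA state of the two
    # target-side listeners; the test then sleeps 1s before re-checking path status.
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

In the trace this corresponds to, for example, "set_ANA_state non_optimized inaccessible" followed by "check_status true false true true true false": after the 4421 listener is made inaccessible, only the 4420 path should remain current and accessible, which is exactly what the jq queries report.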
00:32:40.012 [2024-05-15 15:50:34.692828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.692892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.692970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.692991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.693974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.693990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694190] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694605] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.012 [2024-05-15 15:50:34.694865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:40.012 [2024-05-15 15:50:34.694888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.694904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.694927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.694943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.694966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.694982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.695974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.695997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:32:40.013 [2024-05-15 15:50:34.696259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.013 [2024-05-15 15:50:34.696770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:40.013 [2024-05-15 15:50:34.696798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.696815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.696843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.696860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.696888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.696920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.696948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.696965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.696992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:40.014 [2024-05-15 15:50:34.697854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.697967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.697983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.014 [2024-05-15 15:50:34.698495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.014 [2024-05-15 15:50:34.698541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.014 [2024-05-15 15:50:34.698599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.014 [2024-05-15 15:50:34.698642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:40.014 [2024-05-15 15:50:34.698669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.014 [2024-05-15 15:50:34.698685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:34.698712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.015 [2024-05-15 15:50:34.698728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:34.698755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.015 [2024-05-15 15:50:34.698772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:34.698799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.015 [2024-05-15 15:50:34.698815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 
dnr:0 00:32:40.015 [2024-05-15 15:50:50.242524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.242969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.242992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.015 [2024-05-15 15:50:50.243619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:40.015 [2024-05-15 15:50:50.243721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.015 [2024-05-15 15:50:50.243738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.243765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:40.016 [2024-05-15 15:50:50.243783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.243806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.243822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.243846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.243863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.243886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.243903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.243925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.243942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.243965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.243982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.244004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.244020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.244044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.244060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.245976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.245994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.246017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.016 [2024-05-15 15:50:50.246033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.246056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.246072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.246095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.246117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.246140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.246156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.246179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.246196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.247024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.247049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.247094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.247112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.247135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.247152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.247175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.247191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:40.016 [2024-05-15 15:50:50.247213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.016 [2024-05-15 15:50:50.247238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247570] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:40.017 [2024-05-15 15:50:50.247685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.017 [2024-05-15 15:50:50.247712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:40.017 Received shutdown signal, test time was about 32.393101 seconds 00:32:40.017 00:32:40.017 Latency(us) 00:32:40.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.017 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:40.017 Verification LBA range: start 0x0 length 0x4000 00:32:40.017 Nvme0n1 : 32.39 7666.46 29.95 0.00 0.00 16646.80 476.35 4026531.84 00:32:40.017 =================================================================================================================== 00:32:40.017 Total : 7666.46 29.95 0.00 0.00 16646.80 476.35 4026531.84 00:32:40.017 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:40.275 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:40.275 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:40.531 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:40.531 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.531 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:40.531 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.531 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:40.531 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:40.532 rmmod nvme_tcp 00:32:40.532 rmmod nvme_fabrics 00:32:40.532 rmmod nvme_keyring 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1444249 ']' 00:32:40.532 15:50:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1444249 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1444249 ']' 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1444249 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1444249 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1444249' 00:32:40.532 killing process with pid 1444249 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1444249 00:32:40.532 [2024-05-15 15:50:53.463389] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:40.532 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1444249 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:40.791 15:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.699 15:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:42.699 00:32:42.699 real 0m41.473s 00:32:42.699 user 2m2.282s 00:32:42.699 sys 0m11.472s 00:32:42.699 15:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:42.699 15:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.699 ************************************ 00:32:42.699 END TEST nvmf_host_multipath_status 00:32:42.699 ************************************ 00:32:42.699 15:50:55 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:42.699 15:50:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:42.699 15:50:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:42.699 15:50:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:42.960 ************************************ 00:32:42.960 START TEST nvmf_discovery_remove_ifc 
00:32:42.960 ************************************ 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:42.960 * Looking for test storage... 00:32:42.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:42.960 15:50:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:45.490 15:50:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.490 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:45.491 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:45.491 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:45.491 Found net devices under 0000:09:00.0: cvl_0_0 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:45.491 Found net devices under 0000:09:00.1: cvl_0_1 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:45.491 
15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:45.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:32:45.491 00:32:45.491 --- 10.0.0.2 ping statistics --- 00:32:45.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.491 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:32:45.491 00:32:45.491 --- 10.0.0.1 ping statistics --- 00:32:45.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.491 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1451015 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1451015 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1451015 ']' 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:45.491 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.491 [2024-05-15 15:50:58.459644] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:45.491 [2024-05-15 15:50:58.459741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.491 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.491 [2024-05-15 15:50:58.503326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:32:45.491 [2024-05-15 15:50:58.533968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.750 [2024-05-15 15:50:58.619921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.750 [2024-05-15 15:50:58.619995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.750 [2024-05-15 15:50:58.620009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.750 [2024-05-15 15:50:58.620020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.750 [2024-05-15 15:50:58.620029] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.750 [2024-05-15 15:50:58.620070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.750 [2024-05-15 15:50:58.773825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.750 [2024-05-15 15:50:58.781785] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:45.750 [2024-05-15 15:50:58.782075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:45.750 null0 00:32:45.750 [2024-05-15 15:50:58.813968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1451034 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1451034 /tmp/host.sock 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1451034 ']' 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:45.750 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:45.750 15:50:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 [2024-05-15 15:50:58.878367] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:46.008 [2024-05-15 15:50:58.878448] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451034 ] 00:32:46.008 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.008 [2024-05-15 15:50:58.919778] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:46.008 [2024-05-15 15:50:58.955774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.008 [2024-05-15 15:50:59.048946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:46.267 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.267 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:46.267 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.267 15:50:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:47.207 [2024-05-15 15:51:00.262070] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:47.207 [2024-05-15 15:51:00.262107] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:47.207 [2024-05-15 15:51:00.262132] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] 
sent discovery log page command 00:32:47.475 [2024-05-15 15:51:00.390588] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:47.791 [2024-05-15 15:51:00.613711] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:47.791 [2024-05-15 15:51:00.613778] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:47.791 [2024-05-15 15:51:00.613822] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:47.791 [2024-05-15 15:51:00.613847] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:47.791 [2024-05-15 15:51:00.613884] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:47.791 [2024-05-15 15:51:00.620376] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22def70 was disconnected and freed. delete nvme_qpair. 
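For readers following the trace, the attach sequence above reduces to a small shell pattern: start discovery against the target, then poll bdev_get_bdevs until the expected namespace bdev appears. The sketch below is a minimal reconstruction of that pattern and not the test script itself; it assumes jq is installed and that scripts/rpc.py from an SPDK checkout is on PATH (the suite's rpc_cmd wrapper is replaced by a direct rpc.py call), with helper names mirroring the get_bdev_list/wait_for_bdev functions visible in the xtrace.

#!/usr/bin/env bash
# Minimal sketch (not the test script) of the poll-for-bdev pattern in the trace.
# Assumptions: jq is installed; scripts/rpc.py from an SPDK checkout is on PATH.
HOST_SOCK=/tmp/host.sock

get_bdev_list() {
    # Ask the host app for its bdevs and normalize to one sorted, space-separated line.
    rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value,
    # e.g. "nvme0n1" after discovery attaches, or "" after the path is removed.
    # (The real suite relies on an outer test timeout; this loop is unbounded.)
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

# Start discovery with the same parameters used in this run and wait for the
# namespace bdev that the target exposes.
rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
wait_for_bdev nvme0n1
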
00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:47.791 15:51:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:48.725 15:51:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:50.100 15:51:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:51.033 15:51:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:51.967 15:51:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
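
The loop being traced here is the test's wait_for_bdev poll: it keeps listing bdevs over the RPC socket until the list matches the expected value, sleeping a second between attempts. A minimal sketch of that pattern, assuming that the rpc_cmd seen in the trace is a thin wrapper around scripts/rpc.py; this sketch loops indefinitely, whereas the real helper in the test scripts is more defensive:

    # approximate the get_bdev_list / wait_for_bdev pair visible in the xtrace above
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1   # poll once per second, as the trace shows
        done
    }

    wait_for_bdev ''         # wait until the attached namespace (nvme0n1) disappears
    wait_for_bdev nvme1n1    # later in the test: wait for the rediscovered namespace
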
00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:52.901 15:51:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:53.159 [2024-05-15 15:51:06.054977] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:53.159 [2024-05-15 15:51:06.055045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.159 [2024-05-15 15:51:06.055069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.159 [2024-05-15 15:51:06.055090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.159 [2024-05-15 15:51:06.055106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.159 [2024-05-15 15:51:06.055122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.159 [2024-05-15 15:51:06.055137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.159 [2024-05-15 15:51:06.055154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.159 [2024-05-15 15:51:06.055176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.159 [2024-05-15 15:51:06.055193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.159 [2024-05-15 15:51:06.055211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.159 [2024-05-15 15:51:06.055235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a60e0 is same with the state(5) to be set 00:32:53.159 [2024-05-15 15:51:06.064997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a60e0 (9): Bad file descriptor 00:32:53.159 [2024-05-15 15:51:06.075044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:54.092 15:51:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:54.092 [2024-05-15 15:51:07.105245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:55.466 [2024-05-15 
15:51:08.129281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:55.466 [2024-05-15 15:51:08.129350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a60e0 with addr=10.0.0.2, port=4420 00:32:55.466 [2024-05-15 15:51:08.129376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a60e0 is same with the state(5) to be set 00:32:55.466 [2024-05-15 15:51:08.129876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a60e0 (9): Bad file descriptor 00:32:55.466 [2024-05-15 15:51:08.129930] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.466 [2024-05-15 15:51:08.129974] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:55.466 [2024-05-15 15:51:08.130017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.466 [2024-05-15 15:51:08.130047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.466 [2024-05-15 15:51:08.130069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.466 [2024-05-15 15:51:08.130085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.466 [2024-05-15 15:51:08.130101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.466 [2024-05-15 15:51:08.130123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.466 [2024-05-15 15:51:08.130154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.466 [2024-05-15 15:51:08.130181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.466 [2024-05-15 15:51:08.130240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.466 [2024-05-15 15:51:08.130286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.466 [2024-05-15 15:51:08.130320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:55.467 [2024-05-15 15:51:08.130430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a5570 (9): Bad file descriptor 00:32:55.467 [2024-05-15 15:51:08.131457] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:55.467 [2024-05-15 15:51:08.131481] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:55.467 15:51:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.467 15:51:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:55.467 15:51:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:56.401 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:56.401 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:56.401 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:56.401 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:56.401 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.401 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:56.402 15:51:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:57.337 [2024-05-15 15:51:10.188114] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:57.337 [2024-05-15 15:51:10.188148] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:57.337 [2024-05-15 15:51:10.188173] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:57.337 15:51:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:57.337 [2024-05-15 15:51:10.315644] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:57.337 [2024-05-15 15:51:10.418618] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:57.337 [2024-05-15 15:51:10.418671] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:57.337 [2024-05-15 15:51:10.418708] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:57.337 [2024-05-15 15:51:10.418733] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:57.337 [2024-05-15 15:51:10.418747] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:57.337 [2024-05-15 15:51:10.426345] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22e9b80 was disconnected and freed. delete nvme_qpair. 
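
Stripped of the polling, the scenario this test just exercised reduces to the sequence below (commands as they appear in the trace; cvl_0_0_ns_spdk is the network namespace holding the target-side port, and wait_for_bdev is the poll sketched earlier):

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # take the target address away
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''                                                    # connection times out, nvme0n1 is removed
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # restore the target address
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1                                               # discovery re-attaches the subsystem as nvme1
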
00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1451034 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1451034 ']' 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1451034 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:58.271 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1451034 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1451034' 00:32:58.530 killing process with pid 1451034 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1451034 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1451034 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:58.530 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:58.530 rmmod nvme_tcp 00:32:58.530 rmmod nvme_fabrics 00:32:58.788 rmmod nvme_keyring 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
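
The killprocess helper traced above for pid 1451034 has roughly the following shape; this is an approximation of the autotest_common.sh helper and skips its uname/ps checks and its special handling of sudo-wrapped processes:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1     # nothing to do if it is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap it (works because the test started it as a child)
    }
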
00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1451015 ']' 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1451015 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1451015 ']' 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1451015 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1451015 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1451015' 00:32:58.788 killing process with pid 1451015 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1451015 00:32:58.788 [2024-05-15 15:51:11.681317] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:58.788 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1451015 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.046 15:51:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.947 15:51:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:00.947 00:33:00.947 real 0m18.136s 00:33:00.947 user 0m24.868s 00:33:00.947 sys 0m3.263s 00:33:00.947 15:51:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:00.947 15:51:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:00.947 ************************************ 00:33:00.947 END TEST nvmf_discovery_remove_ifc 00:33:00.947 ************************************ 00:33:00.947 15:51:13 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:00.947 15:51:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:00.947 15:51:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:00.947 15:51:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
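
The nvmf_identify_kernel_target run that starts below builds an NVMe/TCP target out of the kernel nvmet module rather than the SPDK target, then points nvme discover and spdk_nvme_identify at it. Condensed from the configure_kernel_target steps traced further down, the setup is roughly the following; the configfs attribute file names are the standard kernel nvmet ones and are not themselves visible in the trace:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    # (the trace also writes "SPDK-<nqn>" as the model string; attribute name omitted here)
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # back the namespace with the local NVMe disk
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port

    # after which discovery against the kernel target succeeds
    # (the trace additionally passes --hostnqn/--hostid):
    nvme discover -t tcp -a 10.0.0.1 -s 4420
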
00:33:00.947 ************************************ 00:33:00.947 START TEST nvmf_identify_kernel_target 00:33:00.947 ************************************ 00:33:00.947 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:01.206 * Looking for test storage... 00:33:01.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:01.206 15:51:14 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:01.206 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:01.207 15:51:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:03.735 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:03.735 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.735 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:03.736 Found net devices under 0000:09:00.0: cvl_0_0 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:03.736 Found net devices under 0000:09:00.1: cvl_0_1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:03.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:33:03.736 00:33:03.736 --- 10.0.0.2 ping statistics --- 00:33:03.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.736 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:33:03.736 00:33:03.736 --- 10.0.0.1 ping statistics --- 00:33:03.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.736 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:03.736 15:51:16 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:03.736 15:51:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:05.169 Waiting for block devices as requested 00:33:05.169 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:05.169 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:05.169 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:05.169 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:05.429 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:05.429 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:05.429 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:05.429 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:05.687 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:05.687 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:05.687 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:05.687 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:05.946 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:05.946 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:05.946 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:05.946 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:06.205 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:06.205 No valid GPT data, bailing 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:33:06.205 00:33:06.205 Discovery Log Number of Records 2, Generation counter 2 00:33:06.205 =====Discovery Log Entry 0====== 00:33:06.205 trtype: tcp 00:33:06.205 adrfam: ipv4 00:33:06.205 subtype: current discovery subsystem 00:33:06.205 treq: not specified, sq flow control disable supported 00:33:06.205 portid: 1 00:33:06.205 trsvcid: 4420 00:33:06.205 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:06.205 traddr: 10.0.0.1 00:33:06.205 eflags: none 00:33:06.205 sectype: none 00:33:06.205 =====Discovery Log Entry 1====== 00:33:06.205 trtype: tcp 00:33:06.205 adrfam: ipv4 00:33:06.205 subtype: nvme subsystem 00:33:06.205 treq: not specified, sq flow control disable supported 00:33:06.205 portid: 1 00:33:06.205 trsvcid: 4420 00:33:06.205 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:06.205 traddr: 10.0.0.1 00:33:06.205 eflags: none 00:33:06.205 sectype: none 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:06.205 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:06.205 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.205 ===================================================== 00:33:06.205 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:06.205 ===================================================== 00:33:06.205 Controller Capabilities/Features 00:33:06.205 ================================ 00:33:06.205 Vendor ID: 0000 00:33:06.205 Subsystem Vendor ID: 0000 00:33:06.205 Serial Number: 3a641a13c08665b75103 00:33:06.205 Model Number: Linux 00:33:06.205 Firmware Version: 6.7.0-68 00:33:06.205 Recommended Arb Burst: 0 00:33:06.205 IEEE OUI Identifier: 00 00 00 00:33:06.205 Multi-path I/O 00:33:06.205 May have multiple subsystem ports: No 00:33:06.205 May have multiple 
controllers: No 00:33:06.205 Associated with SR-IOV VF: No 00:33:06.205 Max Data Transfer Size: Unlimited 00:33:06.205 Max Number of Namespaces: 0 00:33:06.205 Max Number of I/O Queues: 1024 00:33:06.205 NVMe Specification Version (VS): 1.3 00:33:06.205 NVMe Specification Version (Identify): 1.3 00:33:06.205 Maximum Queue Entries: 1024 00:33:06.205 Contiguous Queues Required: No 00:33:06.205 Arbitration Mechanisms Supported 00:33:06.205 Weighted Round Robin: Not Supported 00:33:06.205 Vendor Specific: Not Supported 00:33:06.205 Reset Timeout: 7500 ms 00:33:06.205 Doorbell Stride: 4 bytes 00:33:06.205 NVM Subsystem Reset: Not Supported 00:33:06.205 Command Sets Supported 00:33:06.205 NVM Command Set: Supported 00:33:06.205 Boot Partition: Not Supported 00:33:06.205 Memory Page Size Minimum: 4096 bytes 00:33:06.205 Memory Page Size Maximum: 4096 bytes 00:33:06.205 Persistent Memory Region: Not Supported 00:33:06.205 Optional Asynchronous Events Supported 00:33:06.205 Namespace Attribute Notices: Not Supported 00:33:06.205 Firmware Activation Notices: Not Supported 00:33:06.205 ANA Change Notices: Not Supported 00:33:06.205 PLE Aggregate Log Change Notices: Not Supported 00:33:06.205 LBA Status Info Alert Notices: Not Supported 00:33:06.205 EGE Aggregate Log Change Notices: Not Supported 00:33:06.205 Normal NVM Subsystem Shutdown event: Not Supported 00:33:06.205 Zone Descriptor Change Notices: Not Supported 00:33:06.205 Discovery Log Change Notices: Supported 00:33:06.205 Controller Attributes 00:33:06.205 128-bit Host Identifier: Not Supported 00:33:06.205 Non-Operational Permissive Mode: Not Supported 00:33:06.205 NVM Sets: Not Supported 00:33:06.205 Read Recovery Levels: Not Supported 00:33:06.205 Endurance Groups: Not Supported 00:33:06.205 Predictable Latency Mode: Not Supported 00:33:06.205 Traffic Based Keep ALive: Not Supported 00:33:06.205 Namespace Granularity: Not Supported 00:33:06.205 SQ Associations: Not Supported 00:33:06.205 UUID List: Not Supported 00:33:06.205 Multi-Domain Subsystem: Not Supported 00:33:06.205 Fixed Capacity Management: Not Supported 00:33:06.205 Variable Capacity Management: Not Supported 00:33:06.205 Delete Endurance Group: Not Supported 00:33:06.205 Delete NVM Set: Not Supported 00:33:06.205 Extended LBA Formats Supported: Not Supported 00:33:06.205 Flexible Data Placement Supported: Not Supported 00:33:06.205 00:33:06.205 Controller Memory Buffer Support 00:33:06.205 ================================ 00:33:06.205 Supported: No 00:33:06.205 00:33:06.205 Persistent Memory Region Support 00:33:06.205 ================================ 00:33:06.205 Supported: No 00:33:06.205 00:33:06.205 Admin Command Set Attributes 00:33:06.205 ============================ 00:33:06.205 Security Send/Receive: Not Supported 00:33:06.205 Format NVM: Not Supported 00:33:06.205 Firmware Activate/Download: Not Supported 00:33:06.205 Namespace Management: Not Supported 00:33:06.205 Device Self-Test: Not Supported 00:33:06.205 Directives: Not Supported 00:33:06.205 NVMe-MI: Not Supported 00:33:06.205 Virtualization Management: Not Supported 00:33:06.205 Doorbell Buffer Config: Not Supported 00:33:06.205 Get LBA Status Capability: Not Supported 00:33:06.205 Command & Feature Lockdown Capability: Not Supported 00:33:06.205 Abort Command Limit: 1 00:33:06.205 Async Event Request Limit: 1 00:33:06.205 Number of Firmware Slots: N/A 00:33:06.205 Firmware Slot 1 Read-Only: N/A 00:33:06.205 Firmware Activation Without Reset: N/A 00:33:06.205 Multiple Update Detection Support: N/A 
00:33:06.205 Firmware Update Granularity: No Information Provided 00:33:06.205 Per-Namespace SMART Log: No 00:33:06.205 Asymmetric Namespace Access Log Page: Not Supported 00:33:06.205 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:06.205 Command Effects Log Page: Not Supported 00:33:06.205 Get Log Page Extended Data: Supported 00:33:06.205 Telemetry Log Pages: Not Supported 00:33:06.205 Persistent Event Log Pages: Not Supported 00:33:06.205 Supported Log Pages Log Page: May Support 00:33:06.205 Commands Supported & Effects Log Page: Not Supported 00:33:06.205 Feature Identifiers & Effects Log Page:May Support 00:33:06.205 NVMe-MI Commands & Effects Log Page: May Support 00:33:06.205 Data Area 4 for Telemetry Log: Not Supported 00:33:06.205 Error Log Page Entries Supported: 1 00:33:06.205 Keep Alive: Not Supported 00:33:06.205 00:33:06.205 NVM Command Set Attributes 00:33:06.205 ========================== 00:33:06.205 Submission Queue Entry Size 00:33:06.205 Max: 1 00:33:06.205 Min: 1 00:33:06.205 Completion Queue Entry Size 00:33:06.205 Max: 1 00:33:06.205 Min: 1 00:33:06.205 Number of Namespaces: 0 00:33:06.205 Compare Command: Not Supported 00:33:06.205 Write Uncorrectable Command: Not Supported 00:33:06.205 Dataset Management Command: Not Supported 00:33:06.205 Write Zeroes Command: Not Supported 00:33:06.205 Set Features Save Field: Not Supported 00:33:06.205 Reservations: Not Supported 00:33:06.205 Timestamp: Not Supported 00:33:06.205 Copy: Not Supported 00:33:06.205 Volatile Write Cache: Not Present 00:33:06.205 Atomic Write Unit (Normal): 1 00:33:06.205 Atomic Write Unit (PFail): 1 00:33:06.205 Atomic Compare & Write Unit: 1 00:33:06.205 Fused Compare & Write: Not Supported 00:33:06.205 Scatter-Gather List 00:33:06.205 SGL Command Set: Supported 00:33:06.205 SGL Keyed: Not Supported 00:33:06.205 SGL Bit Bucket Descriptor: Not Supported 00:33:06.205 SGL Metadata Pointer: Not Supported 00:33:06.205 Oversized SGL: Not Supported 00:33:06.205 SGL Metadata Address: Not Supported 00:33:06.205 SGL Offset: Supported 00:33:06.205 Transport SGL Data Block: Not Supported 00:33:06.205 Replay Protected Memory Block: Not Supported 00:33:06.205 00:33:06.205 Firmware Slot Information 00:33:06.205 ========================= 00:33:06.205 Active slot: 0 00:33:06.205 00:33:06.205 00:33:06.205 Error Log 00:33:06.205 ========= 00:33:06.205 00:33:06.205 Active Namespaces 00:33:06.205 ================= 00:33:06.205 Discovery Log Page 00:33:06.205 ================== 00:33:06.205 Generation Counter: 2 00:33:06.205 Number of Records: 2 00:33:06.205 Record Format: 0 00:33:06.205 00:33:06.205 Discovery Log Entry 0 00:33:06.205 ---------------------- 00:33:06.205 Transport Type: 3 (TCP) 00:33:06.205 Address Family: 1 (IPv4) 00:33:06.205 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:06.205 Entry Flags: 00:33:06.205 Duplicate Returned Information: 0 00:33:06.205 Explicit Persistent Connection Support for Discovery: 0 00:33:06.205 Transport Requirements: 00:33:06.205 Secure Channel: Not Specified 00:33:06.205 Port ID: 1 (0x0001) 00:33:06.205 Controller ID: 65535 (0xffff) 00:33:06.205 Admin Max SQ Size: 32 00:33:06.205 Transport Service Identifier: 4420 00:33:06.205 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:06.205 Transport Address: 10.0.0.1 00:33:06.205 Discovery Log Entry 1 00:33:06.205 ---------------------- 00:33:06.205 Transport Type: 3 (TCP) 00:33:06.205 Address Family: 1 (IPv4) 00:33:06.205 Subsystem Type: 2 (NVM Subsystem) 00:33:06.205 Entry Flags: 
00:33:06.205 Duplicate Returned Information: 0 00:33:06.205 Explicit Persistent Connection Support for Discovery: 0 00:33:06.205 Transport Requirements: 00:33:06.205 Secure Channel: Not Specified 00:33:06.205 Port ID: 1 (0x0001) 00:33:06.205 Controller ID: 65535 (0xffff) 00:33:06.205 Admin Max SQ Size: 32 00:33:06.205 Transport Service Identifier: 4420 00:33:06.205 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:06.205 Transport Address: 10.0.0.1 00:33:06.205 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:06.464 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.464 get_feature(0x01) failed 00:33:06.464 get_feature(0x02) failed 00:33:06.464 get_feature(0x04) failed 00:33:06.464 ===================================================== 00:33:06.464 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:06.464 ===================================================== 00:33:06.464 Controller Capabilities/Features 00:33:06.464 ================================ 00:33:06.464 Vendor ID: 0000 00:33:06.464 Subsystem Vendor ID: 0000 00:33:06.464 Serial Number: 8c57a58a5c287b24d351 00:33:06.464 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:06.464 Firmware Version: 6.7.0-68 00:33:06.464 Recommended Arb Burst: 6 00:33:06.464 IEEE OUI Identifier: 00 00 00 00:33:06.464 Multi-path I/O 00:33:06.464 May have multiple subsystem ports: Yes 00:33:06.464 May have multiple controllers: Yes 00:33:06.464 Associated with SR-IOV VF: No 00:33:06.464 Max Data Transfer Size: Unlimited 00:33:06.464 Max Number of Namespaces: 1024 00:33:06.464 Max Number of I/O Queues: 128 00:33:06.464 NVMe Specification Version (VS): 1.3 00:33:06.464 NVMe Specification Version (Identify): 1.3 00:33:06.464 Maximum Queue Entries: 1024 00:33:06.464 Contiguous Queues Required: No 00:33:06.464 Arbitration Mechanisms Supported 00:33:06.464 Weighted Round Robin: Not Supported 00:33:06.464 Vendor Specific: Not Supported 00:33:06.464 Reset Timeout: 7500 ms 00:33:06.464 Doorbell Stride: 4 bytes 00:33:06.464 NVM Subsystem Reset: Not Supported 00:33:06.464 Command Sets Supported 00:33:06.464 NVM Command Set: Supported 00:33:06.464 Boot Partition: Not Supported 00:33:06.464 Memory Page Size Minimum: 4096 bytes 00:33:06.464 Memory Page Size Maximum: 4096 bytes 00:33:06.464 Persistent Memory Region: Not Supported 00:33:06.464 Optional Asynchronous Events Supported 00:33:06.464 Namespace Attribute Notices: Supported 00:33:06.464 Firmware Activation Notices: Not Supported 00:33:06.464 ANA Change Notices: Supported 00:33:06.464 PLE Aggregate Log Change Notices: Not Supported 00:33:06.464 LBA Status Info Alert Notices: Not Supported 00:33:06.464 EGE Aggregate Log Change Notices: Not Supported 00:33:06.464 Normal NVM Subsystem Shutdown event: Not Supported 00:33:06.464 Zone Descriptor Change Notices: Not Supported 00:33:06.464 Discovery Log Change Notices: Not Supported 00:33:06.464 Controller Attributes 00:33:06.464 128-bit Host Identifier: Supported 00:33:06.464 Non-Operational Permissive Mode: Not Supported 00:33:06.464 NVM Sets: Not Supported 00:33:06.464 Read Recovery Levels: Not Supported 00:33:06.464 Endurance Groups: Not Supported 00:33:06.464 Predictable Latency Mode: Not Supported 00:33:06.464 Traffic Based Keep ALive: Supported 00:33:06.464 Namespace Granularity: Not Supported 
00:33:06.464 SQ Associations: Not Supported 00:33:06.464 UUID List: Not Supported 00:33:06.464 Multi-Domain Subsystem: Not Supported 00:33:06.464 Fixed Capacity Management: Not Supported 00:33:06.464 Variable Capacity Management: Not Supported 00:33:06.464 Delete Endurance Group: Not Supported 00:33:06.464 Delete NVM Set: Not Supported 00:33:06.464 Extended LBA Formats Supported: Not Supported 00:33:06.464 Flexible Data Placement Supported: Not Supported 00:33:06.464 00:33:06.464 Controller Memory Buffer Support 00:33:06.464 ================================ 00:33:06.464 Supported: No 00:33:06.464 00:33:06.464 Persistent Memory Region Support 00:33:06.464 ================================ 00:33:06.464 Supported: No 00:33:06.464 00:33:06.464 Admin Command Set Attributes 00:33:06.464 ============================ 00:33:06.464 Security Send/Receive: Not Supported 00:33:06.464 Format NVM: Not Supported 00:33:06.464 Firmware Activate/Download: Not Supported 00:33:06.464 Namespace Management: Not Supported 00:33:06.464 Device Self-Test: Not Supported 00:33:06.464 Directives: Not Supported 00:33:06.464 NVMe-MI: Not Supported 00:33:06.464 Virtualization Management: Not Supported 00:33:06.464 Doorbell Buffer Config: Not Supported 00:33:06.464 Get LBA Status Capability: Not Supported 00:33:06.464 Command & Feature Lockdown Capability: Not Supported 00:33:06.464 Abort Command Limit: 4 00:33:06.464 Async Event Request Limit: 4 00:33:06.464 Number of Firmware Slots: N/A 00:33:06.464 Firmware Slot 1 Read-Only: N/A 00:33:06.464 Firmware Activation Without Reset: N/A 00:33:06.464 Multiple Update Detection Support: N/A 00:33:06.464 Firmware Update Granularity: No Information Provided 00:33:06.464 Per-Namespace SMART Log: Yes 00:33:06.464 Asymmetric Namespace Access Log Page: Supported 00:33:06.464 ANA Transition Time : 10 sec 00:33:06.464 00:33:06.464 Asymmetric Namespace Access Capabilities 00:33:06.464 ANA Optimized State : Supported 00:33:06.464 ANA Non-Optimized State : Supported 00:33:06.464 ANA Inaccessible State : Supported 00:33:06.464 ANA Persistent Loss State : Supported 00:33:06.464 ANA Change State : Supported 00:33:06.464 ANAGRPID is not changed : No 00:33:06.464 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:06.464 00:33:06.464 ANA Group Identifier Maximum : 128 00:33:06.465 Number of ANA Group Identifiers : 128 00:33:06.465 Max Number of Allowed Namespaces : 1024 00:33:06.465 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:06.465 Command Effects Log Page: Supported 00:33:06.465 Get Log Page Extended Data: Supported 00:33:06.465 Telemetry Log Pages: Not Supported 00:33:06.465 Persistent Event Log Pages: Not Supported 00:33:06.465 Supported Log Pages Log Page: May Support 00:33:06.465 Commands Supported & Effects Log Page: Not Supported 00:33:06.465 Feature Identifiers & Effects Log Page:May Support 00:33:06.465 NVMe-MI Commands & Effects Log Page: May Support 00:33:06.465 Data Area 4 for Telemetry Log: Not Supported 00:33:06.465 Error Log Page Entries Supported: 128 00:33:06.465 Keep Alive: Supported 00:33:06.465 Keep Alive Granularity: 1000 ms 00:33:06.465 00:33:06.465 NVM Command Set Attributes 00:33:06.465 ========================== 00:33:06.465 Submission Queue Entry Size 00:33:06.465 Max: 64 00:33:06.465 Min: 64 00:33:06.465 Completion Queue Entry Size 00:33:06.465 Max: 16 00:33:06.465 Min: 16 00:33:06.465 Number of Namespaces: 1024 00:33:06.465 Compare Command: Not Supported 00:33:06.465 Write Uncorrectable Command: Not Supported 00:33:06.465 Dataset Management Command: Supported 
00:33:06.465 Write Zeroes Command: Supported 00:33:06.465 Set Features Save Field: Not Supported 00:33:06.465 Reservations: Not Supported 00:33:06.465 Timestamp: Not Supported 00:33:06.465 Copy: Not Supported 00:33:06.465 Volatile Write Cache: Present 00:33:06.465 Atomic Write Unit (Normal): 1 00:33:06.465 Atomic Write Unit (PFail): 1 00:33:06.465 Atomic Compare & Write Unit: 1 00:33:06.465 Fused Compare & Write: Not Supported 00:33:06.465 Scatter-Gather List 00:33:06.465 SGL Command Set: Supported 00:33:06.465 SGL Keyed: Not Supported 00:33:06.465 SGL Bit Bucket Descriptor: Not Supported 00:33:06.465 SGL Metadata Pointer: Not Supported 00:33:06.465 Oversized SGL: Not Supported 00:33:06.465 SGL Metadata Address: Not Supported 00:33:06.465 SGL Offset: Supported 00:33:06.465 Transport SGL Data Block: Not Supported 00:33:06.465 Replay Protected Memory Block: Not Supported 00:33:06.465 00:33:06.465 Firmware Slot Information 00:33:06.465 ========================= 00:33:06.465 Active slot: 0 00:33:06.465 00:33:06.465 Asymmetric Namespace Access 00:33:06.465 =========================== 00:33:06.465 Change Count : 0 00:33:06.465 Number of ANA Group Descriptors : 1 00:33:06.465 ANA Group Descriptor : 0 00:33:06.465 ANA Group ID : 1 00:33:06.465 Number of NSID Values : 1 00:33:06.465 Change Count : 0 00:33:06.465 ANA State : 1 00:33:06.465 Namespace Identifier : 1 00:33:06.465 00:33:06.465 Commands Supported and Effects 00:33:06.465 ============================== 00:33:06.465 Admin Commands 00:33:06.465 -------------- 00:33:06.465 Get Log Page (02h): Supported 00:33:06.465 Identify (06h): Supported 00:33:06.465 Abort (08h): Supported 00:33:06.465 Set Features (09h): Supported 00:33:06.465 Get Features (0Ah): Supported 00:33:06.465 Asynchronous Event Request (0Ch): Supported 00:33:06.465 Keep Alive (18h): Supported 00:33:06.465 I/O Commands 00:33:06.465 ------------ 00:33:06.465 Flush (00h): Supported 00:33:06.465 Write (01h): Supported LBA-Change 00:33:06.465 Read (02h): Supported 00:33:06.465 Write Zeroes (08h): Supported LBA-Change 00:33:06.465 Dataset Management (09h): Supported 00:33:06.465 00:33:06.465 Error Log 00:33:06.465 ========= 00:33:06.465 Entry: 0 00:33:06.465 Error Count: 0x3 00:33:06.465 Submission Queue Id: 0x0 00:33:06.465 Command Id: 0x5 00:33:06.465 Phase Bit: 0 00:33:06.465 Status Code: 0x2 00:33:06.465 Status Code Type: 0x0 00:33:06.465 Do Not Retry: 1 00:33:06.465 Error Location: 0x28 00:33:06.465 LBA: 0x0 00:33:06.465 Namespace: 0x0 00:33:06.465 Vendor Log Page: 0x0 00:33:06.465 ----------- 00:33:06.465 Entry: 1 00:33:06.465 Error Count: 0x2 00:33:06.465 Submission Queue Id: 0x0 00:33:06.465 Command Id: 0x5 00:33:06.465 Phase Bit: 0 00:33:06.465 Status Code: 0x2 00:33:06.465 Status Code Type: 0x0 00:33:06.465 Do Not Retry: 1 00:33:06.465 Error Location: 0x28 00:33:06.465 LBA: 0x0 00:33:06.465 Namespace: 0x0 00:33:06.465 Vendor Log Page: 0x0 00:33:06.465 ----------- 00:33:06.465 Entry: 2 00:33:06.465 Error Count: 0x1 00:33:06.465 Submission Queue Id: 0x0 00:33:06.465 Command Id: 0x4 00:33:06.465 Phase Bit: 0 00:33:06.465 Status Code: 0x2 00:33:06.465 Status Code Type: 0x0 00:33:06.465 Do Not Retry: 1 00:33:06.465 Error Location: 0x28 00:33:06.465 LBA: 0x0 00:33:06.465 Namespace: 0x0 00:33:06.465 Vendor Log Page: 0x0 00:33:06.465 00:33:06.465 Number of Queues 00:33:06.465 ================ 00:33:06.465 Number of I/O Submission Queues: 128 00:33:06.465 Number of I/O Completion Queues: 128 00:33:06.465 00:33:06.465 ZNS Specific Controller Data 00:33:06.465 
============================ 00:33:06.465 Zone Append Size Limit: 0 00:33:06.465 00:33:06.465 00:33:06.465 Active Namespaces 00:33:06.465 ================= 00:33:06.465 get_feature(0x05) failed 00:33:06.465 Namespace ID:1 00:33:06.465 Command Set Identifier: NVM (00h) 00:33:06.465 Deallocate: Supported 00:33:06.465 Deallocated/Unwritten Error: Not Supported 00:33:06.465 Deallocated Read Value: Unknown 00:33:06.465 Deallocate in Write Zeroes: Not Supported 00:33:06.465 Deallocated Guard Field: 0xFFFF 00:33:06.465 Flush: Supported 00:33:06.465 Reservation: Not Supported 00:33:06.465 Namespace Sharing Capabilities: Multiple Controllers 00:33:06.465 Size (in LBAs): 1953525168 (931GiB) 00:33:06.465 Capacity (in LBAs): 1953525168 (931GiB) 00:33:06.465 Utilization (in LBAs): 1953525168 (931GiB) 00:33:06.465 UUID: d3b71fa7-27ee-4173-b8f9-49b430b4e3b2 00:33:06.465 Thin Provisioning: Not Supported 00:33:06.465 Per-NS Atomic Units: Yes 00:33:06.465 Atomic Boundary Size (Normal): 0 00:33:06.465 Atomic Boundary Size (PFail): 0 00:33:06.465 Atomic Boundary Offset: 0 00:33:06.465 NGUID/EUI64 Never Reused: No 00:33:06.465 ANA group ID: 1 00:33:06.465 Namespace Write Protected: No 00:33:06.465 Number of LBA Formats: 1 00:33:06.465 Current LBA Format: LBA Format #00 00:33:06.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:06.465 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:06.465 rmmod nvme_tcp 00:33:06.465 rmmod nvme_fabrics 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.465 15:51:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:08.366 
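The nvmftestfini trace above tears the TCP fixture back down: the nvme-tcp and nvme-fabrics modules are unloaded, the SPDK target's network namespace is removed, and the initiator-side address is flushed. A minimal sketch of the equivalent commands, assuming the interface and namespace names used on this node (cvl_0_1, cvl_0_0_ns_spdk) and that _remove_spdk_ns simply deletes the namespace:

  modprobe -v -r nvme-tcp           # also unloads nvme_fabrics, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1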
15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:08.366 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:08.623 15:51:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:09.998 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:09.998 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:09.998 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:10.932 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:10.932 00:33:10.932 real 0m9.943s 00:33:10.932 user 0m2.266s 00:33:10.932 sys 0m3.888s 00:33:10.932 15:51:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:10.932 15:51:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.932 ************************************ 00:33:10.932 END TEST nvmf_identify_kernel_target 00:33:10.932 ************************************ 00:33:10.932 15:51:23 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:10.932 15:51:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:10.932 15:51:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:10.932 15:51:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.932 ************************************ 00:33:10.932 START TEST nvmf_auth_host 00:33:10.932 ************************************ 00:33:10.932 15:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
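The clean_kernel_target calls above amount to disabling the kernel NVMe-oF target built for the previous test and removing its configfs tree bottom-up. A minimal sketch of the equivalent commands; the target of the bare 'echo 0' is assumed to be the namespace's enable attribute:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet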
00:33:11.190 * Looking for test storage... 00:33:11.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:11.190 15:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.720 
15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:13.720 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:13.720 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:13.720 Found net devices under 0000:09:00.0: 
cvl_0_0 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:13.720 Found net devices under 0000:09:00.1: cvl_0_1 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:13.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:33:13.720 00:33:13.720 --- 10.0.0.2 ping statistics --- 00:33:13.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.720 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:33:13.720 00:33:13.720 --- 10.0.0.1 ping statistics --- 00:33:13.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.720 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1459629 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1459629 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1459629 ']' 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.720 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:13.721 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
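nvmf_tcp_init above wires the two ports of the detected NIC into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened, and connectivity is verified with a ping in each direction before the target is started inside the namespace. Condensed from the trace, with the interface names detected on this node and the nvmf_tgt path abbreviated:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmfappstart then launches the target inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &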
00:33:13.721 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:13.721 15:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.979 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e92fba978601e27372666f9b6ead7ff6 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.30W 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e92fba978601e27372666f9b6ead7ff6 0 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e92fba978601e27372666f9b6ead7ff6 0 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e92fba978601e27372666f9b6ead7ff6 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.30W 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.30W 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.30W 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:14.238 
15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0f90f6dcc62a564fa6e7fd03c2aef84f5d5b12c82983fab1776690a82a1603b9 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.y4o 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0f90f6dcc62a564fa6e7fd03c2aef84f5d5b12c82983fab1776690a82a1603b9 3 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0f90f6dcc62a564fa6e7fd03c2aef84f5d5b12c82983fab1776690a82a1603b9 3 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0f90f6dcc62a564fa6e7fd03c2aef84f5d5b12c82983fab1776690a82a1603b9 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.y4o 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.y4o 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.y4o 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:14.238 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e04d7ef76e3b682dffcf8656700b68fc818758d6a57c7365 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VhC 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e04d7ef76e3b682dffcf8656700b68fc818758d6a57c7365 0 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e04d7ef76e3b682dffcf8656700b68fc818758d6a57c7365 0 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e04d7ef76e3b682dffcf8656700b68fc818758d6a57c7365 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VhC 00:33:14.239 15:51:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VhC 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.VhC 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=504f3c9b0aab6dbbb0fa0083023d35ef4924034e3c1467c6 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.v9R 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 504f3c9b0aab6dbbb0fa0083023d35ef4924034e3c1467c6 2 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 504f3c9b0aab6dbbb0fa0083023d35ef4924034e3c1467c6 2 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=504f3c9b0aab6dbbb0fa0083023d35ef4924034e3c1467c6 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.v9R 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.v9R 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.v9R 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=412cb1f704a255dcead3e93b6b68f9a2 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Dhl 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 412cb1f704a255dcead3e93b6b68f9a2 1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 412cb1f704a255dcead3e93b6b68f9a2 1 
00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=412cb1f704a255dcead3e93b6b68f9a2 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Dhl 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Dhl 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Dhl 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=edbe0746a2124e0085503788e99180b3 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Phx 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key edbe0746a2124e0085503788e99180b3 1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 edbe0746a2124e0085503788e99180b3 1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=edbe0746a2124e0085503788e99180b3 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:14.239 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Phx 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Phx 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Phx 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=5fd75ad7debfbf0949f882151332bf4d4030dabb2b13a872 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8JQ 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5fd75ad7debfbf0949f882151332bf4d4030dabb2b13a872 2 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5fd75ad7debfbf0949f882151332bf4d4030dabb2b13a872 2 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5fd75ad7debfbf0949f882151332bf4d4030dabb2b13a872 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8JQ 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8JQ 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8JQ 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=003aafda2bd246f89ae43bb9840f8f7c 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VAn 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 003aafda2bd246f89ae43bb9840f8f7c 0 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 003aafda2bd246f89ae43bb9840f8f7c 0 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=003aafda2bd246f89ae43bb9840f8f7c 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VAn 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VAn 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VAn 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd33d1147c42dae37255dd42ae16664e4dff22daafa5ef111881fef046f621a5 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6EU 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd33d1147c42dae37255dd42ae16664e4dff22daafa5ef111881fef046f621a5 3 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd33d1147c42dae37255dd42ae16664e4dff22daafa5ef111881fef046f621a5 3 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd33d1147c42dae37255dd42ae16664e4dff22daafa5ef111881fef046f621a5 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6EU 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6EU 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6EU 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1459629 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1459629 ']' 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
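Each gen_dhchap_key call above follows the same recipe: read N random bytes, hex-encode them with xxd, wrap the result in the DH-HMAC-CHAP secret representation, and write it to a 0600 key file under /tmp. A minimal sketch of the 'null 32' variant; the exact DHHC-1 wrapping is done by a small python helper in nvmf/common.sh, so the base64-of-key-plus-CRC-32 detail below is an assumption about that helper rather than its verbatim code:

  key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  # assumed DHHC-1 layout: "DHHC-1:<digest id>:base64(key || CRC-32 little-endian):"
  python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:00:"+base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()+":")' "$key" > "$file"
  chmod 0600 "$file"
  echo "$file"                            # e.g. /tmp/spdk.key-null.30W in this run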
00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:14.498 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.30W 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.y4o ]] 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y4o 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.756 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VhC 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.v9R ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v9R 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Dhl 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Phx ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Phx 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
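Once waitforlisten confirms the target is answering on /var/tmp/spdk.sock, the loop begun above registers every generated key file with the target's keyring, host keys as key0..key4 and controller keys as ckey0..ckey3, through the keyring_file_add_key RPC. A standalone sketch of the same calls (the rpc.py path is assumed from the workspace layout; the temp-file names are the ones printed earlier in this run):

#!/usr/bin/env bash
# Sketch of the key-registration loop above: each host key becomes keyN and each
# controller key becomes ckeyN in the target keyring. File names below are the
# mktemp outputs shown earlier in this run; the rpc.py path is an assumption.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.30W /tmp/spdk.key-null.VhC /tmp/spdk.key-sha256.Dhl
      /tmp/spdk.key-sha384.8JQ /tmp/spdk.key-sha512.6EU)
ckeys=(/tmp/spdk.key-sha512.y4o /tmp/spdk.key-sha384.v9R /tmp/spdk.key-sha256.Phx
       /tmp/spdk.key-null.VAn "")

for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    # ckey4 is empty in this run, so the controller key is only added when present.
    if [[ -n "${ckeys[$i]}" ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done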
00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8JQ 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VAn ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VAn 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6EU 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
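With the keys loaded, nvmet_auth_init (begun just above) resolves the initiator IP and calls configure_kernel_target, whose body is traced below: it builds an in-kernel NVMe-oF target over configfs with a subsystem backed by the local NVMe disk, one namespace, and a TCP port on 10.0.0.1:4420, then the allowed-host setup and an nvme discover check follow. The trace only shows the echoed values, not the redirect targets, so the configfs attribute names in this condensed sketch are the standard kernel nvmet ones and should be read as assumptions:

#!/usr/bin/env bash
# Condensed sketch of configure_kernel_target as traced below. Attribute names
# (attr_model, device_path, addr_*, ...) are the usual kernel nvmet configfs
# names; the trace hides the actual redirect targets of the echo commands.
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
subsys=$nvmet/subsystems/$subnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet                                 # trace first checks /sys/module/nvmet
mkdir -p "$subsys" "$ns" "$port"

echo "SPDK-$subnqn" > "$subsys/attr_model"     # assumed target of the 'echo SPDK-nqn...' line
echo 1              > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1   > "$ns/device_path"        # backing device found by the block scan traced below
echo 1              > "$ns/enable"

echo 10.0.0.1       > "$port/addr_traddr"
echo tcp            > "$port/addr_trtype"
echo 4420           > "$port/addr_trsvcid"
echo ipv4           > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"            # expose the subsystem on the TCP port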
00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:14.757 15:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:16.131 Waiting for block devices as requested 00:33:16.131 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:16.131 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:16.388 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:16.388 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:16.388 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:16.388 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:16.647 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:16.647 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:16.647 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:16.647 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:16.904 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:16.904 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:16.904 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:16.904 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:17.162 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:17.162 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:17.162 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:17.730 No valid GPT data, bailing 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:33:17.730 00:33:17.730 Discovery Log Number of Records 2, Generation counter 2 00:33:17.730 =====Discovery Log Entry 0====== 00:33:17.730 trtype: tcp 00:33:17.730 adrfam: ipv4 00:33:17.730 subtype: current discovery subsystem 00:33:17.730 treq: not specified, sq flow control disable supported 00:33:17.730 portid: 1 00:33:17.730 trsvcid: 4420 00:33:17.730 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:17.730 traddr: 10.0.0.1 00:33:17.730 eflags: none 00:33:17.730 sectype: none 00:33:17.730 =====Discovery Log Entry 1====== 00:33:17.730 trtype: tcp 00:33:17.730 adrfam: ipv4 00:33:17.730 subtype: nvme subsystem 00:33:17.730 treq: not specified, sq flow control disable supported 00:33:17.730 portid: 1 00:33:17.730 trsvcid: 4420 00:33:17.730 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:17.730 traddr: 10.0.0.1 00:33:17.730 eflags: none 00:33:17.730 sectype: none 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 
]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.730 nvme0n1 00:33:17.730 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.731 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.731 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.731 
15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.989 
15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.989 15:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.990 15:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.990 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.990 15:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.990 nvme0n1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.990 15:51:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.990 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.248 nvme0n1 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
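Each connect_authenticate round above follows the same shape: bdev_nvme_set_options narrows the initiator to one digest/DH-group combination, bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 with the host key (and the controller key when the round is bidirectional), bdev_nvme_get_controllers confirms that nvme0 came up, and bdev_nvme_detach_controller tears it down for the next round. A standalone sketch of one round using the same RPC flags (the rpc.py path is assumed from the workspace layout):

#!/usr/bin/env bash
# One connect_authenticate round, as in the sha256/ffdhe2048 iterations above:
# restrict the DH-HMAC-CHAP parameters, attach with key1/ckey1, verify, detach.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# A successful attach is the pass criterion: the controller must show up by name.
name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "DH-HMAC-CHAP authentication failed" >&2; exit 1; }

"$rpc" bdev_nvme_detach_controller nvme0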
00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.248 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.249 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.507 nvme0n1 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:18.507 15:51:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.507 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.507 nvme0n1 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.508 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.766 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.767 nvme0n1 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.767 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.025 nvme0n1 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.025 15:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.025 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.025 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.025 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:19.025 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.025 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.025 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.026 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.284 nvme0n1 00:33:19.284 
15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.284 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 nvme0n1 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
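The kernel-target side of each round is nvmet_auth_set_key (host/auth.sh@42-51, seen again just above for keyid 3 with ffdhe3072): it pushes the chosen hash, DH group, host secret and, when one exists, the bidirectional controller secret into the host entry that nvmet_auth_init created and linked into allowed_hosts. The redirect targets are again hidden in the trace, so the dhchap_* attribute names below are the standard kernel nvmet ones and are an assumption; the secret values are copied from the keyid 3 round above:

#!/usr/bin/env bash
# Sketch of nvmet_auth_set_key as traced above: configure DH-HMAC-CHAP for the
# allowed host on the kernel target. Attribute names are the usual nvmet host
# configfs entries; the trace shows only the echoed values.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
key='DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==:'
ckey='DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ:'

echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest chosen for this round
echo ffdhe3072      > "$host/dhchap_dhgroup"   # DH group chosen for this round
echo "$key"         > "$host/dhchap_key"       # host secret (key3 above)
if [[ -n $ckey ]]; then
    echo "$ckey" > "$host/dhchap_ctrl_key"     # controller secret for bidirectional rounds (ckey3)
fi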
00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 nvme0n1 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.802 
15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.802 15:51:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.802 nvme0n1 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.802 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:20.061 15:51:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.061 15:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.061 nvme0n1 00:33:20.061 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.319 15:51:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.319 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.577 nvme0n1 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:20.577 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.578 15:51:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.578 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.836 nvme0n1 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.836 15:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.094 nvme0n1 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.094 15:51:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.094 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.353 nvme0n1 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.353 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:21.612 15:51:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.612 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.178 nvme0n1 00:33:22.178 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.178 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.178 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.178 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.178 15:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.178 15:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.178 
15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.178 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.179 15:51:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.179 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.465 nvme0n1 00:33:22.465 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.465 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.465 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.465 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.465 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.723 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.724 15:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.290 nvme0n1 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.290 
15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.290 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.549 nvme0n1 00:33:23.549 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.549 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.549 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.549 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.549 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:23.817 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.818 15:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.384 nvme0n1 00:33:24.384 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.385 15:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.319 nvme0n1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.319 15:51:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.319 15:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.254 nvme0n1 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:26.254 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.255 15:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.190 nvme0n1 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.190 
15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
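Each repetition in this stretch of the trace follows the same host-side pattern: bdev_nvme_set_options restricts the initiator to a single digest/dhgroup pair, then bdev_nvme_attach_controller connects to the target with the key under test (plus the paired controller key where one exists). A minimal sketch of that pass, using only the RPCs that appear in this trace; rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py, and key3/ckey3 are key names registered earlier in the run and not shown in this chunk:

# Configure the initiator for one digest/dhgroup combination and connect with key 3 (sketch).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3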
00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.190 15:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.123 nvme0n1 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.123 
15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.123 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.124 15:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.057 nvme0n1 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.057 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.316 nvme0n1 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.316 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
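The loop markers echoed at host/auth.sh@100-@102 earlier in this trace show how these repetitions are generated: an outer loop over digests, a middle loop over DH groups, and an inner loop over key indices, with the target-side key installed (host/auth.sh@103) before every connect attempt (host/auth.sh@104). A sketch of that nesting, with the array contents inferred only from values visible in this log (sha256/sha384, ffdhe2048 through ffdhe8192, key indices 0-4):

# Loop structure driving this part of the run (sketch; arrays inferred from the trace).
for digest in "${digests[@]}"; do              # sha256, sha384, ... as seen above
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do         # 0..4; key 4 has no paired ctrlr key in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side digest/dhgroup/key setup
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator-side connect and checks
        done
    done
done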
00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.317 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.575 nvme0n1 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.575 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.576 nvme0n1 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.576 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.834 nvme0n1 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.834 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.835 15:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.093 nvme0n1 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.093 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.351 nvme0n1 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.351 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
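After each authenticated attach, the trace verifies the result through host/auth.sh@64-@65: the controller list is read back over RPC, the single expected name is compared against nvme0, and the controller is detached so the next digest/dhgroup/key combination starts clean. The bare nvme0n1 lines interleaved in the log are presumably the bdev name returned by the attach call. A sketch of that check, again assuming rpc_cmd wraps scripts/rpc.py:

# Verify the authenticated controller came up, then tear it down for the next combination (sketch).
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0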
00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.352 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.609 nvme0n1 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:30.609 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.610 nvme0n1 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.610 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.868 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.869 nvme0n1 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.869 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.128 15:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.128 nvme0n1 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.128 15:51:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.128 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.387 nvme0n1 00:33:31.387 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.387 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.387 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.387 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.387 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.387 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.645 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.903 nvme0n1 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.903 15:51:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.903 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.904 15:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.904 15:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:31.904 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.904 15:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.162 nvme0n1 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:32.162 15:51:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.162 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.421 nvme0n1 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:32.421 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.679 nvme0n1 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:32.679 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.680 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.938 15:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.196 nvme0n1 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:33.196 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.197 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.455 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.021 nvme0n1 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.021 15:51:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.021 15:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.587 nvme0n1 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.587 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.152 nvme0n1 00:33:35.152 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.152 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.152 15:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.153 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.153 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.153 15:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
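Each pass of this loop pairs a target-side key install with a host-side connect: the nvmet_auth_set_key helper echoes the hash name ('hmac(sha384)'), the FFDHE group, and the DHHC-1 secrets for the keyid under test, then connect_authenticate restricts the SPDK host to that digest/dhgroup via bdev_nvme_set_options and attaches with the matching key names. A minimal sketch of the host-side RPCs for one such iteration (sha384 + ffdhe6144, keyid 3), assuming SPDK's stock scripts/rpc.py client is invoked directly instead of the test's rpc_cmd wrapper, and that the key3/ckey3 key names were registered with the host application earlier in the test (that setup is not repeated here):

  # Assumption: scripts/rpc.py talks to the same SPDK host app the test drives via rpc_cmd.
  # Allow only the digest and DH group under test, then connect with DH-HMAC-CHAP.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  ./scripts/rpc.py bdev_nvme_get_controllers      # expect one controller named nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0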
00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.153 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.718 nvme0n1 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
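The DHHC-1 strings cycled through above follow the NVMe DH-HMAC-CHAP secret representation: the two-digit field after "DHHC-1:" names the hash used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload is the raw secret with a 4-byte CRC-32 appended. A quick plain-bash sanity check of the keyid-0 secret used in this run (no SPDK involved; the 36-byte result is the 32-byte secret plus its CRC):

  # Secret string copied from the log; decode length check only, nothing is sent anywhere.
  key='DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0:'
  b64=${key#DHHC-1:*:}; b64=${b64%:}       # strip the DHHC-1 wrapper and trailing colon
  echo -n "$b64" | base64 -d | wc -c       # prints 36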
00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.718 15:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.649 nvme0n1 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:36.649 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:36.650 15:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:36.650 15:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:36.650 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.650 15:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.584 nvme0n1 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.584 15:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.575 nvme0n1 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:38.575 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.576 15:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.508 nvme0n1 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.508 15:51:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.508 15:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.441 nvme0n1 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.441 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.698 nvme0n1 00:33:40.698 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.698 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.698 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.698 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.699 15:51:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.699 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.956 nvme0n1 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:40.956 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.957 15:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.957 nvme0n1 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.957 15:51:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.957 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.216 15:51:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.216 nvme0n1 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.216 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.475 nvme0n1 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.475 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.733 nvme0n1 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.733 
15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:41.733 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.734 15:51:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.734 nvme0n1 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.734 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
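The DHHC-1:xx:...: strings echoed by nvmet_auth_set_key in the entries above are DH-HMAC-CHAP secrets in the usual nvme-cli/SPDK representation: a fixed "DHHC-1" prefix, a two-digit hash identifier (00 for an untransformed secret, 01/02/03 for SHA-256/384/512-transformed ones, which is why the ckeys tagged 03 are the longest), the base64-encoded secret material, and a trailing colon. A minimal sketch of pulling one apart, using key0 from this run; the field interpretation is my reading of the format, not something the script itself asserts:

    # Split one of the secrets echoed above into its fields (key0 from this run).
    key='DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0:'
    IFS=: read -r prefix hash_id secret _ <<< "$key"
    # prefix=DHHC-1, hash_id=00 (untransformed), secret=base64 blob
    echo "format=$prefix hash-id=$hash_id decoded-bytes=$(printf '%s' "$secret" | base64 -d | wc -c)"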
00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.992 15:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.992 nvme0n1 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.992 15:51:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.992 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.249 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.249 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.249 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
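The nvmf/common.sh@741-755 entries traced around here are get_main_ns_ip: an associative array maps the transport to the environment variable that holds the address to dial, and indirect expansion turns that name into 10.0.0.1 for tcp. A simplified reconstruction of that logic, assuming the transport variable name and omitting whatever extra validation the real helper does:

    # Simplified sketch of the address lookup traced at nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
            ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) dial the initiator IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # TEST_TRANSPORT assumed to be "tcp" here
        [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion -> 10.0.0.1
    }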
00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 nvme0n1 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:42.250 
15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.250 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.508 nvme0n1 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:42.508 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.509 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.765 nvme0n1 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:42.765 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.766 15:51:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.766 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.023 15:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.023 nvme0n1 00:33:43.023 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.023 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.023 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.023 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.023 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.023 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.280 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.280 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.280 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.280 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.280 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.280 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
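Each connect_authenticate pass in this trace reduces to four SPDK RPCs issued through the rpc_cmd wrapper: restrict the host to one digest/DH-group pair, attach over TCP with the named secrets for that keyid, confirm the controller came up, and tear it down again. A condensed sketch of one ffdhe4096 iteration exactly as it appears in the surrounding entries (key1/ckey1 are key names assumed to have been registered with the SPDK keyring earlier in the script):

    # One authenticated attach/verify/detach cycle, as traced above
    # (rpc_cmd is the test suite's wrapper around scripts/rpc.py).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0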
00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.281 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.538 nvme0n1 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:43.538 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.539 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.797 nvme0n1 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.797 15:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.057 nvme0n1 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
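The host/auth.sh@101-103 entries just above mark the outer loop advancing to ffdhe6144: the test walks every DH group against every key index, programming the kernel target via nvmet_auth_set_key and then driving connect_authenticate from the host side. A minimal sketch of that driver loop, with the group list limited to what this excerpt actually shows:

    # Driver loop traced at host/auth.sh@101-104: every DH group x every keyid.
    digest=sha512
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)           # groups visible in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                  # keys[0..4] hold the DHHC-1 secrets
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key/group setup
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach + verify
        done
    done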
00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.057 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.621 nvme0n1 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.621 15:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.188 nvme0n1 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.188 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 nvme0n1 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.754 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.755 15:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.321 nvme0n1 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.321 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.888 nvme0n1 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.888 15:51:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyZmJhOTc4NjAxZTI3MzcyNjY2ZjliNmVhZDdmZjbwdzq0: 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGY2ZGNjNjJhNTY0ZmE2ZTdmZDAzYzJhZWY4NGY1ZDViMTJjODI5ODNmYWIxNzc2NjkwYTgyYTE2MDNiOb4pg+0=: 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.888 15:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.821 nvme0n1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.821 15:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.755 nvme0n1 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.755 15:52:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.755 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDEyY2IxZjcwNGEyNTVkY2VhZDNlOTNiNmI2OGY5YTLVRPAj: 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: ]] 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRiZTA3NDZhMjEyNGUwMDg1NTAzNzg4ZTk5MTgwYjOP/VhJ: 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.756 15:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.690 nvme0n1 00:33:49.690 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.690 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.690 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.690 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.690 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.690 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.947 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.947 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWZkNzVhZDdkZWJmYmYwOTQ5Zjg4MjE1MTMzMmJmNGQ0MDMwZGFiYjJiMTNhODcyQTrJoA==: 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: ]] 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDAzYWFmZGEyYmQyNDZmODlhZTQzYmI5ODQwZjhmN2OQaHhJ: 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:49.948 15:52:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.948 15:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.881 nvme0n1 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzM2QxMTQ3YzQyZGFlMzcyNTVkZDQyYWUxNjY2NGU0ZGZmMjJkYWFmYTVlZjExMTg4MWZlZjA0NmY2MjFhNU2/zvs=: 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:50.881 15:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.815 nvme0n1 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.815 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA0ZDdlZjc2ZTNiNjgyZGZmY2Y4NjU2NzAwYjY4ZmM4MTg3NThkNmE1N2M3MzY1jG/rqg==: 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZjNjOWIwYWFiNmRiYmIwZmEwMDgzMDIzZDM1ZWY0OTI0MDM0ZTNjMTQ2N2M2scXmmA==: 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.816 
15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.816 request: 00:33:51.816 { 00:33:51.816 "name": "nvme0", 00:33:51.816 "trtype": "tcp", 00:33:51.816 "traddr": "10.0.0.1", 00:33:51.816 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:51.816 "adrfam": "ipv4", 00:33:51.816 "trsvcid": "4420", 00:33:51.816 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:51.816 "method": "bdev_nvme_attach_controller", 00:33:51.816 "req_id": 1 00:33:51.816 } 00:33:51.816 Got JSON-RPC error response 00:33:51.816 response: 00:33:51.816 { 00:33:51.816 "code": -32602, 00:33:51.816 "message": "Invalid parameters" 00:33:51.816 } 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:51.816 
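Note: the entries above finish the sha512 passes (ffdhe6144 and ffdhe8192, one connect_authenticate per key id) and then run the first expected-failure case: an attach with no DH-HMAC-CHAP key, refused with -32602 "Invalid parameters". Each positive pass first pushes the digest, DH group and DHHC-1 key to the kernel target through nvmet_auth_set_key (the echo 'hmac(sha512)' / ffdhe6144 / DHHC-1:... entries; the configfs destinations are not visible in this trace), then drives the initiator. A minimal sketch of one pass, using only the rpc_cmd invocations that appear in this log (rpc_cmd and the other helpers come from the harness's autotest_common.sh and host/auth.sh; the key id shown is illustrative):

    # offer the digest and DH group under test on the initiator side
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # connect with host key N (plus the controller key when one is defined for that id)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # confirm the authenticated controller exists, then detach before the next key id
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0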
15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.816 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.075 request: 00:33:52.075 { 00:33:52.075 "name": "nvme0", 00:33:52.075 "trtype": "tcp", 00:33:52.075 "traddr": "10.0.0.1", 00:33:52.075 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:52.075 "adrfam": "ipv4", 00:33:52.075 "trsvcid": "4420", 00:33:52.075 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:52.075 "dhchap_key": "key2", 00:33:52.075 "method": "bdev_nvme_attach_controller", 00:33:52.075 "req_id": 1 00:33:52.075 } 00:33:52.075 Got JSON-RPC error response 00:33:52.075 response: 00:33:52.075 { 00:33:52.075 "code": -32602, 00:33:52.075 "message": "Invalid parameters" 00:33:52.075 } 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.075 15:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.075 request: 00:33:52.075 { 00:33:52.075 "name": "nvme0", 00:33:52.075 "trtype": "tcp", 00:33:52.075 "traddr": "10.0.0.1", 00:33:52.075 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:52.075 "adrfam": "ipv4", 00:33:52.075 "trsvcid": "4420", 00:33:52.075 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:52.075 "dhchap_key": "key1", 00:33:52.075 "dhchap_ctrlr_key": "ckey2", 00:33:52.075 "method": "bdev_nvme_attach_controller", 00:33:52.075 
"req_id": 1 00:33:52.075 } 00:33:52.075 Got JSON-RPC error response 00:33:52.075 response: 00:33:52.075 { 00:33:52.075 "code": -32602, 00:33:52.075 "message": "Invalid parameters" 00:33:52.075 } 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:52.075 rmmod nvme_tcp 00:33:52.075 rmmod nvme_fabrics 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1459629 ']' 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1459629 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1459629 ']' 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 1459629 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1459629 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1459629' 00:33:52.075 killing process with pid 1459629 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1459629 00:33:52.075 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1459629 00:33:52.333 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:52.333 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:52.333 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:52.333 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:52.333 15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:52.334 
15:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.334 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:52.334 15:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.235 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:54.235 15:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:54.494 15:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:55.933 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:55.933 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:55.933 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:56.865 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:56.865 15:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.30W /tmp/spdk.key-null.VhC /tmp/spdk.key-sha256.Dhl /tmp/spdk.key-sha384.8JQ /tmp/spdk.key-sha512.6EU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:56.865 15:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:58.241 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:58.241 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:58.241 0000:00:04.5 (8086 0e25): Already 
using the vfio-pci driver 00:33:58.241 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:58.241 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:58.241 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:58.241 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:58.241 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:58.241 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:58.241 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:58.241 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:58.241 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:58.241 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:58.241 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:58.241 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:58.241 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:58.241 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:58.500 00:33:58.500 real 0m47.348s 00:33:58.500 user 0m44.053s 00:33:58.500 sys 0m6.252s 00:33:58.500 15:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:58.500 15:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.500 ************************************ 00:33:58.500 END TEST nvmf_auth_host 00:33:58.500 ************************************ 00:33:58.500 15:52:11 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:33:58.500 15:52:11 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:58.500 15:52:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:58.500 15:52:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:58.500 15:52:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:58.500 ************************************ 00:33:58.500 START TEST nvmf_digest 00:33:58.500 ************************************ 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:58.500 * Looking for test storage... 
00:33:58.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:58.500 15:52:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:58.500 15:52:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.030 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:01.031 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:01.031 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:01.031 Found net devices under 0000:09:00.0: cvl_0_0 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:01.031 Found net devices under 0000:09:00.1: cvl_0_1 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.031 15:52:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:01.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:34:01.031 00:34:01.031 --- 10.0.0.2 ping statistics --- 00:34:01.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.031 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:01.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:34:01.031 00:34:01.031 --- 10.0.0.1 ping statistics --- 00:34:01.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.031 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.031 ************************************ 00:34:01.031 START TEST nvmf_digest_clean 00:34:01.031 ************************************ 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:01.031 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1469377 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1469377 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1469377 ']' 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.032 
15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:01.032 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:01.032 [2024-05-15 15:52:14.109648] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:01.032 [2024-05-15 15:52:14.109716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.290 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.290 [2024-05-15 15:52:14.152721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:01.290 [2024-05-15 15:52:14.183608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.290 [2024-05-15 15:52:14.262447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.290 [2024-05-15 15:52:14.262511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.290 [2024-05-15 15:52:14.262534] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.290 [2024-05-15 15:52:14.262545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.290 [2024-05-15 15:52:14.262554] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:01.290 [2024-05-15 15:52:14.262584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.290 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:01.549 null0 00:34:01.549 [2024-05-15 15:52:14.451575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.549 [2024-05-15 15:52:14.475531] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:01.549 [2024-05-15 15:52:14.475809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1469513 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1469513 /var/tmp/bperf.sock 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1469513 ']' 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:01.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:01.549 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:01.549 [2024-05-15 15:52:14.521280] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:01.549 [2024-05-15 15:52:14.521348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469513 ] 00:34:01.549 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.549 [2024-05-15 15:52:14.558873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:01.549 [2024-05-15 15:52:14.593586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.807 [2024-05-15 15:52:14.680937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.807 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.807 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:01.807 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:01.807 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:01.807 15:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:02.065 15:52:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:02.065 15:52:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:02.631 nvme0n1 00:34:02.631 15:52:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:02.631 15:52:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:02.631 Running I/O for 2 seconds... 
00:34:04.547 00:34:04.547 Latency(us) 00:34:04.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.547 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:04.547 nvme0n1 : 2.00 17684.61 69.08 0.00 0.00 7229.76 3689.43 21554.06 00:34:04.547 =================================================================================================================== 00:34:04.547 Total : 17684.61 69.08 0.00 0.00 7229.76 3689.43 21554.06 00:34:04.547 0 00:34:04.547 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:04.547 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:04.547 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:04.547 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:04.547 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:04.547 | select(.opcode=="crc32c") 00:34:04.547 | "\(.module_name) \(.executed)"' 00:34:04.806 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:04.806 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1469513 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1469513 ']' 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1469513 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1469513 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1469513' 00:34:04.807 killing process with pid 1469513 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1469513 00:34:04.807 Received shutdown signal, test time was about 2.000000 seconds 00:34:04.807 00:34:04.807 Latency(us) 00:34:04.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.807 =================================================================================================================== 00:34:04.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:04.807 15:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1469513 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:05.065 15:52:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1469926 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1469926 /var/tmp/bperf.sock 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1469926 ']' 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:05.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:05.065 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:05.066 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:05.066 [2024-05-15 15:52:18.152173] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:05.066 [2024-05-15 15:52:18.152266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469926 ] 00:34:05.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:05.066 Zero copy mechanism will not be used. 00:34:05.324 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.324 [2024-05-15 15:52:18.187905] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:05.324 [2024-05-15 15:52:18.224767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.324 [2024-05-15 15:52:18.316009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.324 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:05.324 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:05.324 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:05.324 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:05.324 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:05.582 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:05.582 15:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:06.149 nvme0n1 00:34:06.149 15:52:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:06.149 15:52:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:06.407 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:06.407 Zero copy mechanism will not be used. 00:34:06.407 Running I/O for 2 seconds... 
00:34:08.308 00:34:08.308 Latency(us) 00:34:08.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.308 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:08.308 nvme0n1 : 2.00 3717.85 464.73 0.00 0.00 4298.69 1365.33 12815.93 00:34:08.308 =================================================================================================================== 00:34:08.308 Total : 3717.85 464.73 0.00 0.00 4298.69 1365.33 12815.93 00:34:08.308 0 00:34:08.308 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:08.308 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:08.308 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:08.308 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:08.308 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:08.308 | select(.opcode=="crc32c") 00:34:08.308 | "\(.module_name) \(.executed)"' 00:34:08.566 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1469926 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1469926 ']' 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1469926 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1469926 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1469926' 00:34:08.567 killing process with pid 1469926 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1469926 00:34:08.567 Received shutdown signal, test time was about 2.000000 seconds 00:34:08.567 00:34:08.567 Latency(us) 00:34:08.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.567 =================================================================================================================== 00:34:08.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:08.567 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1469926 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:08.825 15:52:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1470327 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1470327 /var/tmp/bperf.sock 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1470327 ']' 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:08.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:08.825 15:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:08.825 [2024-05-15 15:52:21.904588] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:08.825 [2024-05-15 15:52:21.904673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470327 ] 00:34:09.083 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.083 [2024-05-15 15:52:21.940855] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:09.083 [2024-05-15 15:52:21.978782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.083 [2024-05-15 15:52:22.069469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.083 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:09.083 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:09.083 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:09.083 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:09.083 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:09.648 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:09.648 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:09.906 nvme0n1 00:34:09.906 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:09.906 15:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:09.906 Running I/O for 2 seconds... 00:34:11.801 00:34:11.801 Latency(us) 00:34:11.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.801 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:11.802 nvme0n1 : 2.01 20543.14 80.25 0.00 0.00 6220.03 3422.44 17476.27 00:34:11.802 =================================================================================================================== 00:34:11.802 Total : 20543.14 80.25 0.00 0.00 6220.03 3422.44 17476.27 00:34:11.802 0 00:34:12.059 15:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:12.059 15:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:12.059 15:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:12.059 15:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:12.059 15:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:12.059 | select(.opcode=="crc32c") 00:34:12.059 | "\(.module_name) \(.executed)"' 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1470327 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 1470327 ']' 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1470327 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:12.059 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1470327 00:34:12.317 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:12.317 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:12.317 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1470327' 00:34:12.317 killing process with pid 1470327 00:34:12.317 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1470327 00:34:12.317 Received shutdown signal, test time was about 2.000000 seconds 00:34:12.317 00:34:12.317 Latency(us) 00:34:12.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.317 =================================================================================================================== 00:34:12.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:12.317 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1470327 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1470738 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1470738 /var/tmp/bperf.sock 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1470738 ']' 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:12.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:12.574 [2024-05-15 15:52:25.468903] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:12.574 [2024-05-15 15:52:25.468989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470738 ] 00:34:12.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:12.574 Zero copy mechanism will not be used. 00:34:12.574 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.574 [2024-05-15 15:52:25.504883] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:12.574 [2024-05-15 15:52:25.542019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.574 [2024-05-15 15:52:25.630222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:12.574 15:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:13.141 15:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:13.141 15:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:13.399 nvme0n1 00:34:13.399 15:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:13.399 15:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:13.673 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:13.673 Zero copy mechanism will not be used. 00:34:13.673 Running I/O for 2 seconds... 
00:34:15.593 00:34:15.593 Latency(us) 00:34:15.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.593 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:15.593 nvme0n1 : 2.00 3687.90 460.99 0.00 0.00 4328.44 3252.53 12524.66 00:34:15.593 =================================================================================================================== 00:34:15.593 Total : 3687.90 460.99 0.00 0.00 4328.44 3252.53 12524.66 00:34:15.593 0 00:34:15.593 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:15.593 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:15.593 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:15.593 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:15.593 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:15.593 | select(.opcode=="crc32c") 00:34:15.593 | "\(.module_name) \(.executed)"' 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1470738 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1470738 ']' 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1470738 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1470738 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1470738' 00:34:15.852 killing process with pid 1470738 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1470738 00:34:15.852 Received shutdown signal, test time was about 2.000000 seconds 00:34:15.852 00:34:15.852 Latency(us) 00:34:15.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.852 =================================================================================================================== 00:34:15.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:15.852 15:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1470738 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1469377 00:34:16.110 15:52:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1469377 ']' 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1469377 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1469377 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1469377' 00:34:16.110 killing process with pid 1469377 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1469377 00:34:16.110 [2024-05-15 15:52:29.098999] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:16.110 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1469377 00:34:16.368 00:34:16.368 real 0m15.271s 00:34:16.368 user 0m30.400s 00:34:16.368 sys 0m4.001s 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:16.368 ************************************ 00:34:16.368 END TEST nvmf_digest_clean 00:34:16.368 ************************************ 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:16.368 ************************************ 00:34:16.368 START TEST nvmf_digest_error 00:34:16.368 ************************************ 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1471292 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1471292 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
1471292 ']' 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:16.368 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:16.368 [2024-05-15 15:52:29.443798] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:16.368 [2024-05-15 15:52:29.443883] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.627 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.627 [2024-05-15 15:52:29.494187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:16.627 [2024-05-15 15:52:29.525397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.627 [2024-05-15 15:52:29.607430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.627 [2024-05-15 15:52:29.607493] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.627 [2024-05-15 15:52:29.607506] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.627 [2024-05-15 15:52:29.607517] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.627 [2024-05-15 15:52:29.607527] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
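Note on the nvmf_digest_clean verification traced above: the test reads accel framework statistics over the bdevperf RPC socket and only passes if the crc32c opcode was actually executed by the expected module (exp_module=software on this node). A minimal standalone sketch of that check, assuming an SPDK checkout in ./spdk and a bdevperf instance already listening on /var/tmp/bperf.sock:

#!/usr/bin/env bash
# Query accel statistics from bdevperf and keep only the crc32c entry.
read -r acc_module acc_executed < <(
  ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# digest_clean passes only if crc32c ran at least once in the expected module.
if (( acc_executed > 0 )) && [[ "$acc_module" == software ]]; then
  echo "crc32c executed ${acc_executed} times by ${acc_module}: OK"
else
  echo "unexpected accel stats: module=${acc_module} executed=${acc_executed}" >&2
fi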
00:34:16.627 [2024-05-15 15:52:29.607554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:16.627 [2024-05-15 15:52:29.696172] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.627 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:16.886 null0 00:34:16.886 [2024-05-15 15:52:29.815408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.886 [2024-05-15 15:52:29.839374] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:16.886 [2024-05-15 15:52:29.839680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1471315 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1471315 /var/tmp/bperf.sock 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1471315 ']' 00:34:16.886 
15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:16.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:16.886 15:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:16.886 [2024-05-15 15:52:29.888794] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:16.886 [2024-05-15 15:52:29.888868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471315 ] 00:34:16.886 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.886 [2024-05-15 15:52:29.932029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:16.886 [2024-05-15 15:52:29.966946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.143 [2024-05-15 15:52:30.067836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.143 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:17.143 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:17.143 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:17.143 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:17.402 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:17.402 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.402 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:17.402 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.402 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:17.402 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:17.967 nvme0n1 00:34:17.967 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:17.967 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.967 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.967 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.968 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:17.968 15:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:17.968 Running I/O for 2 seconds... 00:34:17.968 [2024-05-15 15:52:31.018432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:17.968 [2024-05-15 15:52:31.018482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.968 [2024-05-15 15:52:31.018508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.968 [2024-05-15 15:52:31.035061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:17.968 [2024-05-15 15:52:31.035097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.968 [2024-05-15 15:52:31.035134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.968 [2024-05-15 15:52:31.046997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:17.968 [2024-05-15 15:52:31.047033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.968 [2024-05-15 15:52:31.047057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.968 [2024-05-15 15:52:31.061785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:17.968 [2024-05-15 15:52:31.061822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.968 [2024-05-15 15:52:31.061847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.075269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.075305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.075325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.093163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.093207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.093245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.108710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.108754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.108798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.121482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.121517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.121537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.137178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.137225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.137248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.149602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.149637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.149666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.164168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.164203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.164244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.176077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.176112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.176132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.191927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.191962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.191983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.204517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.204553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.204580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.220493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.220528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.220548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.232546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.232581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.232601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.250280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.250315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.250336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.266848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.266885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.266905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.280337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.280373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 15:52:31.280393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.226 [2024-05-15 15:52:31.293532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.226 [2024-05-15 15:52:31.293567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.226 [2024-05-15 
15:52:31.293587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.227 [2024-05-15 15:52:31.307540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.227 [2024-05-15 15:52:31.307576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.227 [2024-05-15 15:52:31.307597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.227 [2024-05-15 15:52:31.318946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.227 [2024-05-15 15:52:31.318981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.227 [2024-05-15 15:52:31.319001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.334117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.334158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.334181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.345992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.346028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.346048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.363517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.363554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.363574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.379693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.379729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.379748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.392393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.392432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19851 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.392452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.407414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.407449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.407469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.418705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.418739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.418760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.435416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.485 [2024-05-15 15:52:31.435452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.485 [2024-05-15 15:52:31.435472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.485 [2024-05-15 15:52:31.449061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.449098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.449117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.463402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.463438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.463457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.476989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.477036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.477058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.488590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.488625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.488646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.502824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.502860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.502881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.519240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.519277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.519297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.531695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.531731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.531751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.550759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.550795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.550815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.563285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.563319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.563339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.486 [2024-05-15 15:52:31.576246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.486 [2024-05-15 15:52:31.576280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.486 [2024-05-15 15:52:31.576306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.744 [2024-05-15 15:52:31.590983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.744 [2024-05-15 
15:52:31.591029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.744 [2024-05-15 15:52:31.591060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.744 [2024-05-15 15:52:31.604082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.744 [2024-05-15 15:52:31.604118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.744 [2024-05-15 15:52:31.604137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.744 [2024-05-15 15:52:31.618909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.618944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.618964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.636115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.636151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.636171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.648198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.648242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.648271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.663472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.663507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.663527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.675572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.675607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.675628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.690755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.690790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.690811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.703420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.703462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.703483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.720002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.720038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.720058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.731179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.731221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.731243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.748282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.748318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.748338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.763940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.763982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.764018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.777030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.777066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.777086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.793197] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.793241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.793262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.806060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.806095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.806115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.821456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.821491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.821518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:18.745 [2024-05-15 15:52:31.835317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:18.745 [2024-05-15 15:52:31.835353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.745 [2024-05-15 15:52:31.835373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.850827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.850863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.850884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.865883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.865919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.865939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.879318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.879354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.879374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.894997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.895032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.895051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.911981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.912033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.912063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.925110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.925145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.925165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.940030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.940066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.940087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.953016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.953066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.953088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.967828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.967863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.967884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.980364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.980400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.980420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:31.996371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:31.996406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:31.996426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:32.007817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:32.007853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:32.007873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:32.022763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:32.022799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:32.022820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:32.036832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:32.036867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:32.036887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:32.051069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:32.051104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:32.051148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:32.063655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.004 [2024-05-15 15:52:32.063691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.004 [2024-05-15 15:52:32.063711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.004 [2024-05-15 15:52:32.079457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.005 [2024-05-15 15:52:32.079494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.005 [2024-05-15 15:52:32.079514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.005 [2024-05-15 15:52:32.091196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.005 [2024-05-15 15:52:32.091243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.005 [2024-05-15 15:52:32.091264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.107923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.107960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.107980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.122317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.122354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.122374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.134881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.134938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.151456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.151492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.151512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.167821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.167857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.167878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.180448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.180483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
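Note: the long run of "data digest error on tqpair" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries above is the expected behaviour of the nvmf_digest_error test, not a regression. The target was started with --wait-for-rpc so that the crc32c opcode could be routed through the error-injection accel module before the framework initialized, and corrupt crc32c results were then armed while bdevperf reads with data digests enabled; every corrupted digest is caught on the initiator side and retried, since the bdev layer was configured with --bdev-retry-count -1. A condensed recap of the RPC sequence as traced earlier in this log (rpc.py paths shortened; the target answers on the default /var/tmp/spdk.sock, bdevperf on /var/tmp/bperf.sock):

# Target side: route crc32c through the error-injection accel module (possible because of --wait-for-rpc).
scripts/rpc.py accel_assign_opc -o crc32c -m error
# Initiator side (bdevperf): collect NVMe error stats and retry transport errors indefinitely.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the controller with data digest enabled, then arm corrupt crc32c results on the target.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256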
00:34:19.264 [2024-05-15 15:52:32.180503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.198365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.198410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.198436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.212241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.212295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.212318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.224576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.224612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.241225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.241259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.241279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.254972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.255007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.255027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.266170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.266205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.266234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.279979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.280014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:11008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.280034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.293326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.293361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.293380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.306258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.306293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.306313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.321070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.321111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.321131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.332988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.333022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.333042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.348251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.348286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.348306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.264 [2024-05-15 15:52:32.361629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.264 [2024-05-15 15:52:32.361665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.264 [2024-05-15 15:52:32.361686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.375359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.375405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.375426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.388520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.388554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.388575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.402683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.402731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.402756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.414350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.414385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.414405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.431221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.431256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.431276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.443276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.443311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.443331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.460387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.460422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.460441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.476527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 
00:34:19.523 [2024-05-15 15:52:32.476562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.476582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.489598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.489634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.489670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.502476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.502513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.502533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.519140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.519177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.519227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.531776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.531812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.531832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.549178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.549223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.549246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.565482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.565517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.565543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.581090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.581126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.581146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.599134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.599169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.599189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.523 [2024-05-15 15:52:32.614002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.523 [2024-05-15 15:52:32.614037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.523 [2024-05-15 15:52:32.614064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.781 [2024-05-15 15:52:32.627458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.627495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.627516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.643507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.643549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.643569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.655564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.655599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.655619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.673486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.673522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.673547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.687052] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.687087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.687106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.699745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.699779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.699799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.714862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.714897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.714917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.728563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.728599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.728619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.744400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.744435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.744455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.760576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.760620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.760642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.773845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.773891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.773913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.786084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.786119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.786139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.800431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.800475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.800496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.814802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.814837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.814873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.826863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.826899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.826919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.843052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.843086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.843106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.860745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.860780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.860800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:19.782 [2024-05-15 15:52:32.878584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:19.782 [2024-05-15 15:52:32.878620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.782 [2024-05-15 15:52:32.878641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.040 [2024-05-15 15:52:32.896662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.040 [2024-05-15 15:52:32.896698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.040 [2024-05-15 15:52:32.896719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.040 [2024-05-15 15:52:32.912581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.040 [2024-05-15 15:52:32.912617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.040 [2024-05-15 15:52:32.912675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.040 [2024-05-15 15:52:32.924123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.040 [2024-05-15 15:52:32.924159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.040 [2024-05-15 15:52:32.924179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.040 [2024-05-15 15:52:32.938945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.040 [2024-05-15 15:52:32.938981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.040 [2024-05-15 15:52:32.939001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.040 [2024-05-15 15:52:32.951441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.040 [2024-05-15 15:52:32.951481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.040 [2024-05-15 15:52:32.951502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.040 [2024-05-15 15:52:32.966190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.040 [2024-05-15 15:52:32.966233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.041 [2024-05-15 15:52:32.966256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.041 [2024-05-15 15:52:32.979566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.041 [2024-05-15 15:52:32.979601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.041 [2024-05-15 15:52:32.979625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.041 [2024-05-15 15:52:32.996815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a680) 00:34:20.041 [2024-05-15 15:52:32.996851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.041 [2024-05-15 15:52:32.996871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.041 00:34:20.041 Latency(us) 00:34:20.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.041 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:20.041 nvme0n1 : 2.01 17618.63 68.82 0.00 0.00 7254.03 3810.80 25049.32 00:34:20.041 =================================================================================================================== 00:34:20.041 Total : 17618.63 68.82 0.00 0.00 7254.03 3810.80 25049.32 00:34:20.041 0 00:34:20.041 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:20.041 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:20.041 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:20.041 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:20.041 | .driver_specific 00:34:20.041 | .nvme_error 00:34:20.041 | .status_code 00:34:20.041 | .command_transient_transport_error' 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1471315 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1471315 ']' 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1471315 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1471315 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1471315' 00:34:20.299 killing process with pid 1471315 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1471315 00:34:20.299 Received shutdown signal, test time was about 2.000000 seconds 00:34:20.299 00:34:20.299 Latency(us) 00:34:20.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.299 =================================================================================================================== 00:34:20.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:34:20.299 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1471315 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1471725 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1471725 /var/tmp/bperf.sock 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1471725 ']' 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:20.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:20.567 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:20.567 [2024-05-15 15:52:33.571655] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:20.567 [2024-05-15 15:52:33.571742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471725 ] 00:34:20.567 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:20.567 Zero copy mechanism will not be used. 00:34:20.567 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.567 [2024-05-15 15:52:33.608090] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
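Note on the get_transient_errcount trace a little further up: this is how digest.sh decides that the 4096-byte, queue-depth-128 randread pass really did hit digest errors. It asks the bdevperf application, over its RPC socket, for the bdev's I/O statistics and extracts the count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR, the status printed for every data digest error earlier in the log, and the stage is accepted because that count is non-zero (the trace evaluates (( 138 > 0 ))). A condensed sketch of that check, built only from the commands visible in the trace (the errcount variable name is illustrative; this is not the verbatim digest.sh source):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as shown in the trace
  # Query the bdevperf app over its RPC socket for per-bdev I/O statistics, then pull out the number
  # of completions with status COMMAND TRANSIENT TRANSPORT ERROR (the status logged for each digest error).
  errcount=$($rpc_py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The stage only passes if at least one such error was counted; here it evaluated (( 138 > 0 )).
  (( errcount > 0 ))

The nvme_error counters in bdev_get_iostat are populated because the controller is set up with bdev_nvme_set_options --nvme-error-stat, the same option that appears again in the setup trace for the next pass below.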
00:34:20.567 [2024-05-15 15:52:33.646671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.824 [2024-05-15 15:52:33.738714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.824 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:20.824 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:20.824 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:20.824 15:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:21.082 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:21.082 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.082 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:21.082 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.082 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:21.082 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:21.649 nvme0n1 00:34:21.649 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:21.649 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.649 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:21.649 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.649 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:21.649 15:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:21.649 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:21.649 Zero copy mechanism will not be used. 00:34:21.649 Running I/O for 2 seconds... 
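The 2-second run that starts here was prepared by the RPC calls traced above against the freshly started bdevperf instance (-w randread -o 131072 -q 16 -z on /var/tmp/bperf.sock): NVMe error statistics are enabled with the bdev retry count set to -1 (retry rather than fail), any previous CRC32C error injection is cleared, the target is attached over TCP with data digest enabled (--ddgst), and injection is then re-armed with an interval of 32 before perform_tests launches the workload. Collected into plain rpc.py invocations, this is roughly the following (a sketch, not the digest.sh source; the rpc_py/bperf_py variable names are illustrative, and the accel_error_inject_error calls are shown without an -s socket because the trace does not expand which RPC socket rpc_cmd addresses):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  # Enable per-controller NVMe error counters and retry failed I/O at the bdev layer instead of failing it.
  $rpc_py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previously configured crc32c error injection (issued via rpc_cmd in the trace above).
  $rpc_py accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF TCP target with data digest enabled (--ddgst), so received data is CRC32C-checked.
  $rpc_py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm injection to corrupt crc32c results at interval 32, then start the 2-second randread run.
  $rpc_py accel_error_inject_error -o crc32c -t corrupt -i 32
  $bperf_py -s /var/tmp/bperf.sock perform_tests

With CRC32C results being corrupted at that interval, the data digest check on received data fails and the reads complete with the same COMMAND TRANSIENT TRANSPORT ERROR status seen before; with --bdev-retry-count -1 they are retried rather than failed, which is presumably why the earlier summary reports Fail/s of 0.00 despite the steady stream of digest errors.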
00:34:21.649 [2024-05-15 15:52:34.576678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.576731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.576759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.584629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.584663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.584690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.592899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.592935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.592958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.602240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.602275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.602300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.610657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.610692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.610715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.620261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.620305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.620326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.630165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.630200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.630233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.638454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.638489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.638508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.647452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.647487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.647507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.656763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.656798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.656827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.666526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.666561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.666580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.675976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.676011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.676030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.685386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.685421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.685441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.695048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.695083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.695103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.703395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.703441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.703462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.713228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.713263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.713288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.721338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.721374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.721393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.730524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.730558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.730578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.740016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.740051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.740072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.649 [2024-05-15 15:52:34.749564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.649 [2024-05-15 15:52:34.749601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.649 [2024-05-15 15:52:34.749631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.759109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.759146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.759166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.769069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.769105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.769125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.778946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.778983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.779021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.788516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.788554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.788573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.798079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.798114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.798133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.807571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.807613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.817404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.817442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.817462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.826828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.826863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 
[2024-05-15 15:52:34.826890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.836655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.836691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.836710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.846653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.846688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.846709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.856114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.856149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.856168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.865623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.865667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.865689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.875556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.875591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.875610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.885578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.885612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.885632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.895679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.895714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.895734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.905695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.905729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.905749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.915739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.908 [2024-05-15 15:52:34.915774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.908 [2024-05-15 15:52:34.915794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.908 [2024-05-15 15:52:34.925659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.909 [2024-05-15 15:52:34.925694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.909 [2024-05-15 15:52:34.925714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.909 [2024-05-15 15:52:34.935081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.909 [2024-05-15 15:52:34.935116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.909 [2024-05-15 15:52:34.935135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.909 [2024-05-15 15:52:34.940609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.909 [2024-05-15 15:52:34.940643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.909 [2024-05-15 15:52:34.940662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.909 [2024-05-15 15:52:34.950292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.909 [2024-05-15 15:52:34.950326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.909 [2024-05-15 15:52:34.950346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.909 [2024-05-15 15:52:34.960411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:21.909 [2024-05-15 15:52:34.960445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:21.909 [2024-05-15 15:52:34.960464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:21.909 [2024-05-15 15:52:34.970195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000)
00:34:21.909 [2024-05-15 15:52:34.970237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:21.909 [2024-05-15 15:52:34.970258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (*ERROR* data digest error on tqpair=(0xcd9000), *NOTICE* READ sqid:1 cid:11 nsid:1 len:32, *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11) repeats for successive LBAs from 15:52:34.979 through 15:52:36.155, elapsed 00:34:21.909 to 00:34:23.210 ...]
00:34:23.210 [2024-05-15 15:52:36.155836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000)
00:34:23.210 [2024-05-15 15:52:36.155868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:23.210 [2024-05-15 15:52:36.155888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:23.210 [2024-05-15 15:52:36.164059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.164096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.164116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.172352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.172385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.172404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.180689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.180723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.180743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.188874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.188905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.188924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.197078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.197110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.197128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.205303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.205334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.205353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.213528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.213559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.213579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.221807] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.221840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.221859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.230003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.230035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.230054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.238332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.238364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.238384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.246631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.246662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.246681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.256350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.256384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.256404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.266587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.266621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.266641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.276831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.276865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.276885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:23.210 [2024-05-15 15:52:36.287285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.287333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.287355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.297681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.297721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.297741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.210 [2024-05-15 15:52:36.307677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.210 [2024-05-15 15:52:36.307712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.210 [2024-05-15 15:52:36.307733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.469 [2024-05-15 15:52:36.316067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.469 [2024-05-15 15:52:36.316100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.469 [2024-05-15 15:52:36.316130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.469 [2024-05-15 15:52:36.324482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.469 [2024-05-15 15:52:36.324514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.469 [2024-05-15 15:52:36.324534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.469 [2024-05-15 15:52:36.332862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.469 [2024-05-15 15:52:36.332894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.469 [2024-05-15 15:52:36.332914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.469 [2024-05-15 15:52:36.341126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.469 [2024-05-15 15:52:36.341158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.469 [2024-05-15 15:52:36.341178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.469 [2024-05-15 15:52:36.349516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.469 [2024-05-15 15:52:36.349548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.469 [2024-05-15 15:52:36.349567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.469 [2024-05-15 15:52:36.357816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.469 [2024-05-15 15:52:36.357848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.357867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.366329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.366362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.366381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.375420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.375451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.375473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.383666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.383697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.383716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.391880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.391916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.391936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.400048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.400079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.400098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.408306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.408337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.408356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.416568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.416600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.416618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.425099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.425132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.425158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.435140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.435174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.435193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.444726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.444759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.444779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.454184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.454226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.454249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.464575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.464609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:23.470 [2024-05-15 15:52:36.464629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.475121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.475155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.475175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.485771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.485806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.485826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.495971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.496006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.496026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.505769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.505804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.505823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.516040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.516075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.516094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.526746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.526780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.470 [2024-05-15 15:52:36.526800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.470 [2024-05-15 15:52:36.536968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000) 00:34:23.470 [2024-05-15 15:52:36.537002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:23.470 [2024-05-15 15:52:36.537021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:23.470 [2024-05-15 15:52:36.547816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000)
00:34:23.470 [2024-05-15 15:52:36.547851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:23.470 [2024-05-15 15:52:36.547876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:23.470 [2024-05-15 15:52:36.556600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000)
00:34:23.470 [2024-05-15 15:52:36.556640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:23.470 [2024-05-15 15:52:36.556661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:23.470 [2024-05-15 15:52:36.566802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd9000)
00:34:23.470 [2024-05-15 15:52:36.566837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:23.470 [2024-05-15 15:52:36.566857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:23.728
00:34:23.728 Latency(us)
00:34:23.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:23.728 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:23.728 nvme0n1 : 2.00 3581.57 447.70 0.00 0.00 4463.17 1274.31 11262.48
00:34:23.728 ===================================================================================================================
00:34:23.728 Total : 3581.57 447.70 0.00 0.00 4463.17 1274.31 11262.48
00:34:23.728 0
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:23.728 | .driver_specific
00:34:23.728 | .nvme_error
00:34:23.728 | .status_code
00:34:23.728 | .command_transient_transport_error'
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 ))
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1471725
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1471725 ']'
00:34:23.728 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1471725
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1471725
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1471725'
00:34:23.987 killing process with pid 1471725
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1471725
00:34:23.987 Received shutdown signal, test time was about 2.000000 seconds
00:34:23.987
00:34:23.987 Latency(us)
00:34:23.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:23.987 ===================================================================================================================
00:34:23.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:23.987 15:52:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1471725
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1472158
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1472158 /var/tmp/bperf.sock
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1472158 ']'
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:23.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:23.987 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:24.245 [2024-05-15 15:52:37.110148] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization...
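[Editor's note] Just above, before the next bdevperf instance is launched, the randread pass is judged: host/digest.sh@71 pulls I/O statistics from the bdevperf app over its RPC socket and extracts the per-bdev count of completions that finished with the (00/22) status printed in the completions, then requires it to be non-zero (231 here). A minimal standalone sketch of that check follows; the rpc.py path, socket and jq path are the ones shown in the trace, while the surrounding script scaffolding is illustrative only, not the digest.sh source.

    #!/usr/bin/env bash
    # Sketch of the transient-error check traced above; assumes the bdevperf
    # instance was started with bdev_nvme_set_options --nvme-error-stat, as in this run.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    BDEV=nvme0n1

    # bdev_get_iostat returns JSON; the jq path below is the one used in the trace
    # and selects the counter for "command transient transport error" (SCT 00 / SC 22).
    count=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    if (( count > 0 )); then
        echo "digest error path exercised: $count transient transport errors"   # this run saw 231
    else
        echo "no transient transport errors recorded" >&2
        exit 1
    fi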
00:34:24.245 [2024-05-15 15:52:37.110255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472158 ]
00:34:24.245 EAL: No free 2048 kB hugepages reported on node 1
00:34:24.245 [2024-05-15 15:52:37.149765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:34:24.245 [2024-05-15 15:52:37.187582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:24.245 [2024-05-15 15:52:37.279904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:24.504 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:24.504 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:24.504 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:24.504 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:24.762 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:24.762 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.762 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:24.762 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.762 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:24.762 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:25.021 nvme0n1
00:34:25.021 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:25.021 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:25.021 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:25.021 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:25.021 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:25.021 15:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:25.021 Running I/O for 2 seconds...
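[Editor's note] The setup traced above for this randwrite pass boils down to a handful of commands, collected in the sketch below. Everything is taken from the trace except the nvmf target's RPC socket: the corrupt-crc32c injection is issued through rpc_cmd, i.e. against the target application rather than against bdevperf, and the trace does not show that socket's path, so the SPDK default is assumed here. Data digests on write PDUs are then verified on the target with the corrupted crc32c results, which is why the completions that follow report transient transport errors on WRITE commands.

    #!/usr/bin/env bash
    # Condensed from the xtrace above: wiring for the randwrite digest-error pass.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    TARGET_SOCK=/var/tmp/spdk.sock   # assumed SPDK default; not shown in the trace

    # 1. Start bdevperf in wait-for-tests mode (-z): randwrite, 4 KiB I/O, QD 128, 2 s runtime.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # 2. Keep per-bdev NVMe error counters and retry failed I/O indefinitely in the bdev layer.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the target over TCP with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Make the target's accel layer return corrupt crc32c results for 256 operations,
    #    so data digest checks fail and the affected I/O complete with (00/22).
    "$SPDK/scripts/rpc.py" -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

    # 5. Kick off the timed run.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests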
00:34:25.021 [2024-05-15 15:52:38.108188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ee5c8 00:34:25.021 [2024-05-15 15:52:38.109233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.021 [2024-05-15 15:52:38.109302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:25.021 [2024-05-15 15:52:38.121251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ef6a8 00:34:25.280 [2024-05-15 15:52:38.122417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.122448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.134634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fbcf0 00:34:25.280 [2024-05-15 15:52:38.135870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.135903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.148020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eff18 00:34:25.280 [2024-05-15 15:52:38.149472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.149502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.159858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e23b8 00:34:25.280 [2024-05-15 15:52:38.161214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.161253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.171596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e84c0 00:34:25.280 [2024-05-15 15:52:38.172538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.172571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.184458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2510 00:34:25.280 [2024-05-15 15:52:38.185111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.185144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.198839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fac10 00:34:25.280 [2024-05-15 15:52:38.200597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.280 [2024-05-15 15:52:38.200629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:25.280 [2024-05-15 15:52:38.212059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f1430 00:34:25.280 [2024-05-15 15:52:38.213943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.213975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.223772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ebb98 00:34:25.281 [2024-05-15 15:52:38.225143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.225175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.235261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f7100 00:34:25.281 [2024-05-15 15:52:38.237134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.237166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.246953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e01f8 00:34:25.281 [2024-05-15 15:52:38.247820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.247851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.259861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2510 00:34:25.281 [2024-05-15 15:52:38.260876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.260908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.271833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fc128 00:34:25.281 [2024-05-15 15:52:38.272849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.272881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.285903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fb8b8 00:34:25.281 [2024-05-15 15:52:38.287117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.287149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.298917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0bc0 00:34:25.281 [2024-05-15 15:52:38.300321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.300350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.310804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ed0b0 00:34:25.281 [2024-05-15 15:52:38.312176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.312223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.322522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e8088 00:34:25.281 [2024-05-15 15:52:38.323397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.323425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.335096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ddc00 00:34:25.281 [2024-05-15 15:52:38.335766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.335795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.348261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eb760 00:34:25.281 [2024-05-15 15:52:38.349075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.349103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.361466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e99d8 00:34:25.281 [2024-05-15 15:52:38.362485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.362514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:25.281 [2024-05-15 15:52:38.375742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5ec8 00:34:25.281 [2024-05-15 15:52:38.377638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.281 [2024-05-15 15:52:38.377670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:25.539 [2024-05-15 15:52:38.384843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2d80 00:34:25.539 [2024-05-15 15:52:38.385779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.539 [2024-05-15 15:52:38.385810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:25.539 [2024-05-15 15:52:38.396773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fef90 00:34:25.539 [2024-05-15 15:52:38.397627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.539 [2024-05-15 15:52:38.397658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:25.539 [2024-05-15 15:52:38.409881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e9e10 00:34:25.539 [2024-05-15 15:52:38.410884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.539 [2024-05-15 15:52:38.410916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:25.539 [2024-05-15 15:52:38.422923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eaef0 00:34:25.539 [2024-05-15 15:52:38.424121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.539 [2024-05-15 15:52:38.424153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:25.539 [2024-05-15 15:52:38.436820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f8a50 00:34:25.539 [2024-05-15 15:52:38.438211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.438263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.449835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fb480 00:34:25.540 [2024-05-15 15:52:38.451481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.451510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.460119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e7c50 00:34:25.540 [2024-05-15 15:52:38.460991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.461022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.473319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190de470 00:34:25.540 [2024-05-15 15:52:38.474353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.474384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.485210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ebfd0 00:34:25.540 [2024-05-15 15:52:38.486242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.486274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.498436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f31b8 00:34:25.540 [2024-05-15 15:52:38.499627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.499658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.511604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e9168 00:34:25.540 [2024-05-15 15:52:38.512952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.512982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.523353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0788 00:34:25.540 [2024-05-15 15:52:38.524183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.524232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.536110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190de038 00:34:25.540 [2024-05-15 15:52:38.536768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.536799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.549288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fe2e8 00:34:25.540 [2024-05-15 15:52:38.550092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.550125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.563699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eff18 00:34:25.540 [2024-05-15 15:52:38.565569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.565601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.575442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f35f0 00:34:25.540 [2024-05-15 15:52:38.576796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.576827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.586910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e6738 00:34:25.540 [2024-05-15 15:52:38.588737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.588768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.598562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f57b0 00:34:25.540 [2024-05-15 15:52:38.599421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.599452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.611521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eff18 00:34:25.540 [2024-05-15 15:52:38.612545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.612578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.623432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ff3c8 00:34:25.540 [2024-05-15 15:52:38.624446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 
15:52:38.624479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:25.540 [2024-05-15 15:52:38.636674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e4578 00:34:25.540 [2024-05-15 15:52:38.637975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.540 [2024-05-15 15:52:38.638013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.651174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fd208 00:34:25.798 [2024-05-15 15:52:38.652572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.798 [2024-05-15 15:52:38.652604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.664213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f6458 00:34:25.798 [2024-05-15 15:52:38.665753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.798 [2024-05-15 15:52:38.665784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.676134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0350 00:34:25.798 [2024-05-15 15:52:38.677670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.798 [2024-05-15 15:52:38.677701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.689329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f8a50 00:34:25.798 [2024-05-15 15:52:38.691018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.798 [2024-05-15 15:52:38.691049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.701062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0bc0 00:34:25.798 [2024-05-15 15:52:38.702246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.798 [2024-05-15 15:52:38.702277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.713782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2510 00:34:25.798 [2024-05-15 15:52:38.714763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:25.798 [2024-05-15 15:52:38.714794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:25.798 [2024-05-15 15:52:38.725659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ec408 00:34:25.798 [2024-05-15 15:52:38.727508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.798 [2024-05-15 15:52:38.727538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.737305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2510 00:34:25.799 [2024-05-15 15:52:38.738164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.738195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.748998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e7c50 00:34:25.799 [2024-05-15 15:52:38.749844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.749875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.763002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eea00 00:34:25.799 [2024-05-15 15:52:38.764051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.764082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.775978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eb760 00:34:25.799 [2024-05-15 15:52:38.777183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.777233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.787911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f20d8 00:34:25.799 [2024-05-15 15:52:38.789095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.789126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.801919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e1710 00:34:25.799 [2024-05-15 15:52:38.803295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8425 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.803326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.814876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e0630 00:34:25.799 [2024-05-15 15:52:38.816423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.816453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.826751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f3e60 00:34:25.799 [2024-05-15 15:52:38.828271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.828301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.838505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0ff8 00:34:25.799 [2024-05-15 15:52:38.839515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.839548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.851245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e4578 00:34:25.799 [2024-05-15 15:52:38.852047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.852078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.865637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5a90 00:34:25.799 [2024-05-15 15:52:38.867516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.867547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.878834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190df988 00:34:25.799 [2024-05-15 15:52:38.880878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.880909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:25.799 [2024-05-15 15:52:38.887760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e7c50 00:34:25.799 [2024-05-15 15:52:38.888635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18600 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:25.799 [2024-05-15 15:52:38.888667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:26.058 [2024-05-15 15:52:38.899972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e49b0 00:34:26.058 [2024-05-15 15:52:38.900827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.058 [2024-05-15 15:52:38.900858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:26.058 [2024-05-15 15:52:38.914134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ef6a8 00:34:26.058 [2024-05-15 15:52:38.915166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.058 [2024-05-15 15:52:38.915197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:38.927103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ebb98 00:34:26.059 [2024-05-15 15:52:38.928300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:38.928331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:38.939018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fdeb0 00:34:26.059 [2024-05-15 15:52:38.940203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:38.940240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:38.953022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fac10 00:34:26.059 [2024-05-15 15:52:38.954393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:38.954424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:38.965995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e3498 00:34:26.059 [2024-05-15 15:52:38.967543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:38.967573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:38.977915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f4298 00:34:26.059 [2024-05-15 15:52:38.979450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:38.979480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:38.989689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190df118 00:34:26.059 [2024-05-15 15:52:38.990704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:38.990735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.002427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f20d8 00:34:26.059 [2024-05-15 15:52:39.003241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.003272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.016800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ed0b0 00:34:26.059 [2024-05-15 15:52:39.018688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.018718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.029967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eee38 00:34:26.059 [2024-05-15 15:52:39.031999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.032029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.038903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e49b0 00:34:26.059 [2024-05-15 15:52:39.039759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.039790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.050798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2510 00:34:26.059 [2024-05-15 15:52:39.051641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.051671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.063983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ee190 00:34:26.059 [2024-05-15 15:52:39.065013] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.065044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.077183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f31b8 00:34:26.059 [2024-05-15 15:52:39.078381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.078418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.090368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f5378 00:34:26.059 [2024-05-15 15:52:39.091721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.091752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.102127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5220 00:34:26.059 [2024-05-15 15:52:39.102966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.102998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.114863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190df988 00:34:26.059 [2024-05-15 15:52:39.115511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.115542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.129272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e0630 00:34:26.059 [2024-05-15 15:52:39.130960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.130991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.141175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190feb58 00:34:26.059 [2024-05-15 15:52:39.142380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.142412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:26.059 [2024-05-15 15:52:39.153908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fac10 00:34:26.059 [2024-05-15 15:52:39.154889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.059 [2024-05-15 15:52:39.154920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.166281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e6300 00:34:26.319 [2024-05-15 15:52:39.168106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.168137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.177090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e7c50 00:34:26.319 [2024-05-15 15:52:39.177943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.177973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.190295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e9168 00:34:26.319 [2024-05-15 15:52:39.191314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.191345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.204310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2510 00:34:26.319 [2024-05-15 15:52:39.205507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.205538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.217277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eb328 00:34:26.319 [2024-05-15 15:52:39.218634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.218665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.229158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5a90 00:34:26.319 [2024-05-15 15:52:39.230519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.230549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.240919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f4298 00:34:26.319 [2024-05-15 
15:52:39.241757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.241787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.253644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e3d08 00:34:26.319 [2024-05-15 15:52:39.254283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.254313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.266806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f35f0 00:34:26.319 [2024-05-15 15:52:39.267618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.267648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.281180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eff18 00:34:26.319 [2024-05-15 15:52:39.283048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.283079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.292934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ed4e8 00:34:26.319 [2024-05-15 15:52:39.294281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.294313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.304401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f1ca0 00:34:26.319 [2024-05-15 15:52:39.305740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.305769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.317548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190efae0 00:34:26.319 [2024-05-15 15:52:39.319053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.319084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.329297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2948 
00:34:26.319 [2024-05-15 15:52:39.330294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.330325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.342040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fb048 00:34:26.319 [2024-05-15 15:52:39.342852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.342883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.355243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5658 00:34:26.319 [2024-05-15 15:52:39.356225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.356268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:26.319 [2024-05-15 15:52:39.369663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e6b70 00:34:26.319 [2024-05-15 15:52:39.371689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.319 [2024-05-15 15:52:39.371720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:26.320 [2024-05-15 15:52:39.378589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e95a0 00:34:26.320 [2024-05-15 15:52:39.379432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.320 [2024-05-15 15:52:39.379463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:26.320 [2024-05-15 15:52:39.392918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f3a28 00:34:26.320 [2024-05-15 15:52:39.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.320 [2024-05-15 15:52:39.394460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:26.320 [2024-05-15 15:52:39.406105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ed0b0 00:34:26.320 [2024-05-15 15:52:39.407798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.320 [2024-05-15 15:52:39.407834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:26.320 [2024-05-15 15:52:39.419554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) 
with pdu=0x2000190f6458 00:34:26.579 [2024-05-15 15:52:39.421505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.421537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.431568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f4298 00:34:26.579 [2024-05-15 15:52:39.432901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.432940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.443045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e0630 00:34:26.579 [2024-05-15 15:52:39.444845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.444879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.454692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f6020 00:34:26.579 [2024-05-15 15:52:39.455539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.455570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.467698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f9b30 00:34:26.579 [2024-05-15 15:52:39.468700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.468731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.479713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ee190 00:34:26.579 [2024-05-15 15:52:39.480707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.480738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.492906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eee38 00:34:26.579 [2024-05-15 15:52:39.494064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.494095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.506913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13bb1d0) with pdu=0x2000190fef90 00:34:26.579 [2024-05-15 15:52:39.508283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.508313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.519898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5220 00:34:26.579 [2024-05-15 15:52:39.521429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.521460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.531805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ef270 00:34:26.579 [2024-05-15 15:52:39.533301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.533332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.545021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eff18 00:34:26.579 [2024-05-15 15:52:39.546714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.546745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.558179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0788 00:34:26.579 [2024-05-15 15:52:39.560023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.560055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.569918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f3e60 00:34:26.579 [2024-05-15 15:52:39.571248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.571280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.581363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f1430 00:34:26.579 [2024-05-15 15:52:39.583178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.583209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.594514] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e38d0 00:34:26.579 [2024-05-15 15:52:39.596526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.596557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.605339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2d80 00:34:26.579 [2024-05-15 15:52:39.606341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.606374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.618524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e9168 00:34:26.579 [2024-05-15 15:52:39.619683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.619715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.631677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5658 00:34:26.579 [2024-05-15 15:52:39.633007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.633040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.645702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e73e0 00:34:26.579 [2024-05-15 15:52:39.647230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.647262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.657420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ef6a8 00:34:26.579 [2024-05-15 15:52:39.658916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.658947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:26.579 [2024-05-15 15:52:39.669160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f81e0 00:34:26.579 [2024-05-15 15:52:39.670149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.579 [2024-05-15 15:52:39.670180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:26.838 
[2024-05-15 15:52:39.682186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eee38 00:34:26.838 [2024-05-15 15:52:39.683025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.838 [2024-05-15 15:52:39.683056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:26.838 [2024-05-15 15:52:39.695541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f4b08 00:34:26.838 [2024-05-15 15:52:39.696504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.838 [2024-05-15 15:52:39.696537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:26.838 [2024-05-15 15:52:39.707433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fdeb0 00:34:26.838 [2024-05-15 15:52:39.709261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.838 [2024-05-15 15:52:39.709292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.720623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eff18 00:34:26.839 [2024-05-15 15:52:39.722602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.722633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.732270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e23b8 00:34:26.839 [2024-05-15 15:52:39.733268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.733307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.745282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e38d0 00:34:26.839 [2024-05-15 15:52:39.746463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.746493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.757165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0bc0 00:34:26.839 [2024-05-15 15:52:39.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.758364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.771154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f81e0 00:34:26.839 [2024-05-15 15:52:39.772507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.772539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.784118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190de8a8 00:34:26.839 [2024-05-15 15:52:39.785635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.785667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.796046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ebfd0 00:34:26.839 [2024-05-15 15:52:39.797555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.797586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.807836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f7970 00:34:26.839 [2024-05-15 15:52:39.808834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.808865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.821810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fd208 00:34:26.839 [2024-05-15 15:52:39.823486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.823526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.833571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f92c0 00:34:26.839 [2024-05-15 15:52:39.834731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.834762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.846317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f6458 00:34:26.839 [2024-05-15 15:52:39.847296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.847338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.860728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f0ff8 00:34:26.839 [2024-05-15 15:52:39.862748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.862779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.869672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e7818 00:34:26.839 [2024-05-15 15:52:39.870504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.870535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.883937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fbcf0 00:34:26.839 [2024-05-15 15:52:39.885955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.885986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.895600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fb8b8 00:34:26.839 [2024-05-15 15:52:39.896608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.896639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.909709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e0630 00:34:26.839 [2024-05-15 15:52:39.911385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.911415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.921447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e4de8 00:34:26.839 [2024-05-15 15:52:39.922595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.922627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:26.839 [2024-05-15 15:52:39.934162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f9f68 00:34:26.839 [2024-05-15 15:52:39.935118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.839 [2024-05-15 15:52:39.935149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:39.947802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190f2948 00:34:27.098 [2024-05-15 15:52:39.948930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:39.948961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:39.962195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190df118 00:34:27.098 [2024-05-15 15:52:39.964393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:39.964424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:39.971122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ddc00 00:34:27.098 [2024-05-15 15:52:39.972119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:39.972150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:39.983015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190edd58 00:34:27.098 [2024-05-15 15:52:39.983995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:39.984026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:39.997047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fc128 00:34:27.098 [2024-05-15 15:52:39.998223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:39.998254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.010076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ec408 00:34:27.098 [2024-05-15 15:52:40.011269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.011310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.023570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ea680 00:34:27.098 [2024-05-15 15:52:40.024900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.024935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.037689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e5a90 00:34:27.098 [2024-05-15 15:52:40.039223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.039258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.050761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ea680 00:34:27.098 [2024-05-15 15:52:40.052426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.052458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.063339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190fe720 00:34:27.098 [2024-05-15 15:52:40.065018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.065059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.076623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190ecc78 00:34:27.098 [2024-05-15 15:52:40.078461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.078502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.089865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190eb328 00:34:27.098 [2024-05-15 15:52:40.091874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.091907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:27.098 [2024-05-15 15:52:40.101241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb1d0) with pdu=0x2000190e27f0 00:34:27.098 [2024-05-15 15:52:40.102401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.098 [2024-05-15 15:52:40.102433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:27.098 00:34:27.098 Latency(us) 00:34:27.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.098 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.098 nvme0n1 : 2.00 20255.00 79.12 0.00 0.00 6307.88 2936.98 15534.46 00:34:27.098 
=================================================================================================================== 00:34:27.098 Total : 20255.00 79.12 0.00 0.00 6307.88 2936.98 15534.46 00:34:27.098 0 00:34:27.098 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:27.098 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:27.098 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:27.098 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:27.098 | .driver_specific 00:34:27.098 | .nvme_error 00:34:27.098 | .status_code 00:34:27.098 | .command_transient_transport_error' 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1472158 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1472158 ']' 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1472158 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1472158 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1472158' 00:34:27.357 killing process with pid 1472158 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1472158 00:34:27.357 Received shutdown signal, test time was about 2.000000 seconds 00:34:27.357 00:34:27.357 Latency(us) 00:34:27.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.357 =================================================================================================================== 00:34:27.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:27.357 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1472158 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1472655 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 
-t 2 -q 16 -z 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1472655 /var/tmp/bperf.sock 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1472655 ']' 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:27.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:27.615 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:27.615 [2024-05-15 15:52:40.705999] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:27.615 [2024-05-15 15:52:40.706088] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472655 ] 00:34:27.615 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:27.615 Zero copy mechanism will not be used. 00:34:27.873 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.873 [2024-05-15 15:52:40.742101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:27.873 [2024-05-15 15:52:40.773980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.873 [2024-05-15 15:52:40.855249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.873 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:27.873 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:27.873 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:27.873 15:52:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:28.169 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:28.169 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.169 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:28.169 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.169 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:28.169 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:28.450 nvme0n1 00:34:28.709 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:28.709 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.709 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:28.709 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.709 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:28.709 15:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:28.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:28.709 Zero copy mechanism will not be used. 00:34:28.709 Running I/O for 2 seconds... 
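The trace above is the setup for the next digest-error pass (randwrite, 128 KiB I/O, queue depth 16): a fresh bdevperf is started against /var/tmp/bperf.sock, NVMe error counters are enabled, the TCP controller is attached with data digest (--ddgst) turned on, and crc32c corruption is injected so that a subset of writes fail the data-digest check. Below is a minimal standalone sketch of that sequence, reusing only the rpc.py/bdevperf.py invocations visible in the trace; SPDK_ROOT is a placeholder for the SPDK checkout, and it assumes (as the trace suggests) a bdevperf RPC socket at /var/tmp/bperf.sock, an NVMe-oF/TCP target at 10.0.0.2:4420 exposing nqn.2016-06.io.spdk:cnode1, and that the crc32c injection RPC goes to the target application on its default socket rather than to bdevperf.

  # Sketch of the digest-error flow traced above (assumptions noted in comments).
  SPDK_ROOT=/path/to/spdk                                        # placeholder for the SPDK checkout
  BPERF_RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's RPC socket
  TGT_RPC="$SPDK_ROOT/scripts/rpc.py"                            # assumed: nvmf target on the default socket

  # Start bdevperf with the same arguments as the trace: core mask 0x2, randwrite,
  # 128 KiB I/O, 2 s runtime, queue depth 16, -z = wait for an explicit perform_tests.
  "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  # (the harness waits for /var/tmp/bperf.sock with waitforlisten before the RPCs below)

  # Enable per-controller NVMe error counters and retry failed I/O at the bdev layer (-1 = unlimited).
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any stale injection, attach the controller with data digest enabled,
  # then corrupt 32 crc32c operations so the target reports digest errors.
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the 2-second job, then read back the transient-transport-error counter the
  # same way get_transient_errcount does earlier in the trace.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  errs=$($BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 )) && echo "observed $errs transient transport errors"

The log that follows is the 2-second run itself: each affected write hits a data-digest CRC mismatch on the target side (tcp.c data_crc32_calc_done) and completes back to the host as a TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the counter read above is checking.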
00:34:28.709 [2024-05-15 15:52:41.677739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.678109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.678154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.686244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.686719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.686750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.695878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.696233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.696267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.705734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.706097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.706130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.715501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.715866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.715899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.725660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.726018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.726061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.735855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.736206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.736246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.745832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.746193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.746236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.755616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.756002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.756035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.765467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.765822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.765853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.775760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.776110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.776157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.785680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.785850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.795657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.795982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.796014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.709 [2024-05-15 15:52:41.804799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.709 [2024-05-15 15:52:41.804900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.709 [2024-05-15 15:52:41.804928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.968 [2024-05-15 15:52:41.814027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.968 [2024-05-15 15:52:41.814380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.968 [2024-05-15 15:52:41.814411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.968 [2024-05-15 15:52:41.824089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.968 [2024-05-15 15:52:41.824451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.968 [2024-05-15 15:52:41.824481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.968 [2024-05-15 15:52:41.834041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.968 [2024-05-15 15:52:41.834415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.968 [2024-05-15 15:52:41.834459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.968 [2024-05-15 15:52:41.843467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.968 [2024-05-15 15:52:41.843627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.968 [2024-05-15 15:52:41.843654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.968 [2024-05-15 15:52:41.853068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.968 [2024-05-15 15:52:41.853420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.968 [2024-05-15 15:52:41.853450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.968 [2024-05-15 15:52:41.862985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.968 [2024-05-15 15:52:41.863329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.968 [2024-05-15 15:52:41.863361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.872670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.873008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.873036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.882092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.882410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.882441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.892138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.892448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.892478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.902973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.903333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.903377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.912968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.913314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.913344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.922312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.922655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.922686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.931111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.931445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.931475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.940966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.941307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 
[2024-05-15 15:52:41.941338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.950856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.951183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.951259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.960763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.961095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.961122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.970704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.971034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.971077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.980234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.980589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.980639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.989809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.989940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.989968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:41.999322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:41.999454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:41.999482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:42.009102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:42.009464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:42.009509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:42.019301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:42.019656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:42.019703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:42.029045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:42.029419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:42.029461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:42.038904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:42.039243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.969 [2024-05-15 15:52:42.039273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:28.969 [2024-05-15 15:52:42.048383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.969 [2024-05-15 15:52:42.048540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.970 [2024-05-15 15:52:42.048568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:28.970 [2024-05-15 15:52:42.058306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.970 [2024-05-15 15:52:42.058676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.970 [2024-05-15 15:52:42.058719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:28.970 [2024-05-15 15:52:42.068301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:28.970 [2024-05-15 15:52:42.068678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.970 [2024-05-15 15:52:42.068721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.078073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.078419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.078448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.087711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.088036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.088066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.096799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.097140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.097167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.106347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.106703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.106746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.115857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.116061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.116088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.125870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.126223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.126253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.135202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.135548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.135576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.144391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.144730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.144758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.154152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.154531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.154576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.164008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.164378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.164423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.173424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.173751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.173780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.183099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.183465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.183510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.192598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.192936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.192982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.202627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.202977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.203005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.211900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 
[2024-05-15 15:52:42.212268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.212313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.221278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.221605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.221634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.230723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.231068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.231118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.240287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.240627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.240655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.249155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.249468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.249498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.258432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.258773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.258803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.268269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.268665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.268698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.277298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.277609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.277638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.286942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.287315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.287344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.296535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.296894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.296921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.306235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.306587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.306615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.315896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.229 [2024-05-15 15:52:42.316250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.229 [2024-05-15 15:52:42.316279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.229 [2024-05-15 15:52:42.325354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.230 [2024-05-15 15:52:42.325718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.230 [2024-05-15 15:52:42.325764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.335161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.335521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.335549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.345103] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.345435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.345482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.354450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.354785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.354830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.364412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.364740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.364783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.374820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.375163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.375208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.384015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.384374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.384403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.392928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.393084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.393111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.402415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.402769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.402797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:29.489 [2024-05-15 15:52:42.412355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.412681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.412709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.421527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.421836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.421866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.430997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.431314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.431344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.441607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.441929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.441958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.451066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.451252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.451280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.460947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.461111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.461139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.469622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.469958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.469987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.479463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.479800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.479830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.489089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.489453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.489482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.498626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.498967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.498995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.507261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.507590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.507618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.517013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.517378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.517422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.526717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.527069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.527096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.536490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.536830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.536872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.489 [2024-05-15 15:52:42.545904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.489 [2024-05-15 15:52:42.546250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.489 [2024-05-15 15:52:42.546280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.490 [2024-05-15 15:52:42.554778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.490 [2024-05-15 15:52:42.555123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.490 [2024-05-15 15:52:42.555151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.490 [2024-05-15 15:52:42.564352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.490 [2024-05-15 15:52:42.564710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.490 [2024-05-15 15:52:42.564739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.490 [2024-05-15 15:52:42.574060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.490 [2024-05-15 15:52:42.574407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.490 [2024-05-15 15:52:42.574437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.490 [2024-05-15 15:52:42.583543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.490 [2024-05-15 15:52:42.583854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.490 [2024-05-15 15:52:42.583883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.593090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.593431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.593462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.601886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.602241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.602271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.611523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.611866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.611914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.620984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.621345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.621391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.630590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.630757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.630784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.639899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.640255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.640304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.649196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.649517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.649547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.658992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.659358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.659399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.668796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.669136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 
[2024-05-15 15:52:42.669178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.678181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.678501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.678541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.687124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.687453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.687508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.696386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.696580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.696607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.706138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.706456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.706485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.715712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.715883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.715910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.725649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.725993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.726022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.735496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.735820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.735850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.744969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.745304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.745334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.753975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.754347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.754376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.762518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.762862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.762889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.771502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.771667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.771695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.780954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.781120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.781149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.789978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.790377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.790406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.799227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.799538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.799567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.809046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.809379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.809408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.818526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.818837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.818866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.827134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.827450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.827479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.836677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.836986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.837015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.749 [2024-05-15 15:52:42.846338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:29.749 [2024-05-15 15:52:42.846674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.749 [2024-05-15 15:52:42.846721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.855938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.856301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.856344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.865399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.865734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.865762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.875024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.875382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.875411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.884957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.885305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.885341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.893769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.894080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.894109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.903495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.903847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.903875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.913685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.913858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.913885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.922604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.922781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.922809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.931842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 
[2024-05-15 15:52:42.932228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.932257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.940653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.008 [2024-05-15 15:52:42.941042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.008 [2024-05-15 15:52:42.941085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.008 [2024-05-15 15:52:42.948837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.949163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.949191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:42.957083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.957393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.957422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:42.964878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.965228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.965256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:42.972824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.973147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.973175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:42.980017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.980316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.980345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:42.988749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.989084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.989113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:42.996999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:42.997365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:42.997394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.005125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.005427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.005470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.013507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.013864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.013892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.021241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.021610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.021638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.029401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.029744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.029772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.038170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.038565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.045820] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.046114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.046143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.052981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.053326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.053354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.060551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.060874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.060903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.068671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.069001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.069029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.076892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.077262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.077291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.084260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.084613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.084642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.092446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.092759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.092801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:30.009 [2024-05-15 15:52:43.100074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.100400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.100436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.009 [2024-05-15 15:52:43.107792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.009 [2024-05-15 15:52:43.108088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.009 [2024-05-15 15:52:43.108118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.268 [2024-05-15 15:52:43.116516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.268 [2024-05-15 15:52:43.116929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.268 [2024-05-15 15:52:43.116958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.268 [2024-05-15 15:52:43.125554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.268 [2024-05-15 15:52:43.125957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.268 [2024-05-15 15:52:43.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.134438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.134878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.134906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.143452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.143839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.143868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.152670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.152974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.153003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.160612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.160936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.160965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.169429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.169818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.169862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.178408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.178815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.178843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.186354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.186832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.186860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.195901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.196195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.196256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.204543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.204918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.204946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.212851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.213154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.213182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.220683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.221023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.221052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.228668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.229020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.229049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.236471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.236874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.236903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.245394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.245689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.245724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.252948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.253317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.253356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.260595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.260890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.260918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.268433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.268746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.268774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.276196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.276594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.276622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.284767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.285055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.285083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.291921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.292225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.292253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.300392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.300732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.300761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.308127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.308479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.308507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.315997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.316337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.316367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.323960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.324269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 
[2024-05-15 15:52:43.324297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.331763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.332012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.332041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.339459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.339776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.339804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.346987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.347285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.347313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.355308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.355617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.355646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.269 [2024-05-15 15:52:43.362791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.269 [2024-05-15 15:52:43.363055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.269 [2024-05-15 15:52:43.363084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.370328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.370580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.370609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.377289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.377539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.377569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.384772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.385082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.385110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.392322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.392582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.392610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.399353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.399604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.399633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.407077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.407356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.407385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.415167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.415506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.415534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.422826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.423146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.423175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.430565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.430857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.430885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.438583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.438870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.438899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.446040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.446299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.446332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.453950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.454255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.454283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.461712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.462025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.462053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.470082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.470406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.470434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.477931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.478260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.478289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.485735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.485995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.486023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.493876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.494126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.494155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.527 [2024-05-15 15:52:43.500844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.527 [2024-05-15 15:52:43.501136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.527 [2024-05-15 15:52:43.501164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.508815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.509132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.517434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.517821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.517850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.526415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.526698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.526727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.535358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.535665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.544213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 
[2024-05-15 15:52:43.544641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.544669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.553121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.553485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.553514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.561903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.562278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.562311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.570865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.571144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.571173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.579558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.579847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.579875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.588470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.588769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.588798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.597272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.597555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.597583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.605900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.606188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.606238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.614630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.614952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.614981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.528 [2024-05-15 15:52:43.623580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.528 [2024-05-15 15:52:43.624014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.528 [2024-05-15 15:52:43.624042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.785 [2024-05-15 15:52:43.632324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.785 [2024-05-15 15:52:43.632710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.785 [2024-05-15 15:52:43.632739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.785 [2024-05-15 15:52:43.641337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.785 [2024-05-15 15:52:43.641616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.785 [2024-05-15 15:52:43.641644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:30.785 [2024-05-15 15:52:43.650513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.785 [2024-05-15 15:52:43.650895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.785 [2024-05-15 15:52:43.650926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.785 [2024-05-15 15:52:43.659555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.785 [2024-05-15 15:52:43.659843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.785 [2024-05-15 15:52:43.659872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:30.785 [2024-05-15 15:52:43.668270] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13bb510) with pdu=0x2000190fef90 00:34:30.785 [2024-05-15 15:52:43.668544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.785 [2024-05-15 15:52:43.668577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:30.785 00:34:30.785 Latency(us) 00:34:30.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.785 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:30.785 nvme0n1 : 2.00 3438.48 429.81 0.00 0.00 4642.30 3203.98 11747.93 00:34:30.785 =================================================================================================================== 00:34:30.785 Total : 3438.48 429.81 0.00 0.00 4642.30 3203.98 11747.93 00:34:30.785 0 00:34:30.785 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:30.785 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:30.785 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:30.785 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:30.785 | .driver_specific 00:34:30.785 | .nvme_error 00:34:30.785 | .status_code 00:34:30.785 | .command_transient_transport_error' 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 )) 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1472655 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1472655 ']' 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1472655 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1472655 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1472655' 00:34:31.044 killing process with pid 1472655 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1472655 00:34:31.044 Received shutdown signal, test time was about 2.000000 seconds 00:34:31.044 00:34:31.044 Latency(us) 00:34:31.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.044 =================================================================================================================== 00:34:31.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:31.044 15:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1472655 00:34:31.302 15:52:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1471292 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1471292 ']' 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1471292 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1471292 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1471292' 00:34:31.302 killing process with pid 1471292 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1471292 00:34:31.302 [2024-05-15 15:52:44.208317] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:31.302 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1471292 00:34:31.561 00:34:31.561 real 0m15.047s 00:34:31.561 user 0m29.733s 00:34:31.561 sys 0m4.192s 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:31.561 ************************************ 00:34:31.561 END TEST nvmf_digest_error 00:34:31.561 ************************************ 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:31.561 rmmod nvme_tcp 00:34:31.561 rmmod nvme_fabrics 00:34:31.561 rmmod nvme_keyring 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1471292 ']' 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1471292 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1471292 ']' 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1471292 00:34:31.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1471292) - No such 
process 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1471292 is not found' 00:34:31.561 Process with pid 1471292 is not found 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:31.561 15:52:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.465 15:52:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:33.465 00:34:33.465 real 0m35.146s 00:34:33.465 user 1m1.098s 00:34:33.465 sys 0m10.071s 00:34:33.465 15:52:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:33.465 15:52:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:33.465 ************************************ 00:34:33.465 END TEST nvmf_digest 00:34:33.465 ************************************ 00:34:33.723 15:52:46 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:34:33.724 15:52:46 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:34:33.724 15:52:46 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:34:33.724 15:52:46 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:33.724 15:52:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:33.724 15:52:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:33.724 15:52:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.724 ************************************ 00:34:33.724 START TEST nvmf_bdevperf 00:34:33.724 ************************************ 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:33.724 * Looking for test storage... 
00:34:33.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:33.724 15:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.252 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:36.253 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:36.253 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:36.253 Found net devices under 0000:09:00.0: cvl_0_0 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:36.253 Found net devices under 0000:09:00.1: cvl_0_1 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:36.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:34:36.253 00:34:36.253 --- 10.0.0.2 ping statistics --- 00:34:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.253 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:36.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:34:36.253 00:34:36.253 --- 10.0.0.1 ping statistics --- 00:34:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.253 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1475301 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1475301 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1475301 ']' 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:36.253 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.253 [2024-05-15 15:52:49.330543] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:36.253 [2024-05-15 15:52:49.330615] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.511 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.511 [2024-05-15 15:52:49.373786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
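At this point the harness has brought the target-side interface up inside the cvl_0_0_ns_spdk namespace, confirmed 10.0.0.2/10.0.0.1 reachability with ping, loaded nvme-tcp, and is waiting for the freshly started nvmf_tgt to answer on its RPC socket. A minimal sketch of that launch-and-wait step, assuming the working directory is the SPDK repository root and the default /var/tmp/spdk.sock socket; the rpc_get_methods polling loop is only an illustration of the wait, not the harness's own waitforlisten helper:

# Start the target inside the test namespace (same core mask and flags as this run)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the RPC socket until the target is ready to accept configuration
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"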
00:34:36.511 [2024-05-15 15:52:49.404618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:36.511 [2024-05-15 15:52:49.485381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.511 [2024-05-15 15:52:49.485433] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.511 [2024-05-15 15:52:49.485446] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.511 [2024-05-15 15:52:49.485457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.511 [2024-05-15 15:52:49.485467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.511 [2024-05-15 15:52:49.485596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:36.511 [2024-05-15 15:52:49.485659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:36.511 [2024-05-15 15:52:49.485661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.511 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.769 [2024-05-15 15:52:49.615172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.769 Malloc0 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.769 15:52:49 
nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.769 [2024-05-15 15:52:49.673525] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:36.769 [2024-05-15 15:52:49.673865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:36.769 { 00:34:36.769 "params": { 00:34:36.769 "name": "Nvme$subsystem", 00:34:36.769 "trtype": "$TEST_TRANSPORT", 00:34:36.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.769 "adrfam": "ipv4", 00:34:36.769 "trsvcid": "$NVMF_PORT", 00:34:36.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.769 "hdgst": ${hdgst:-false}, 00:34:36.769 "ddgst": ${ddgst:-false} 00:34:36.769 }, 00:34:36.769 "method": "bdev_nvme_attach_controller" 00:34:36.769 } 00:34:36.769 EOF 00:34:36.769 )") 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:36.769 15:52:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:36.769 "params": { 00:34:36.769 "name": "Nvme1", 00:34:36.769 "trtype": "tcp", 00:34:36.769 "traddr": "10.0.0.2", 00:34:36.769 "adrfam": "ipv4", 00:34:36.769 "trsvcid": "4420", 00:34:36.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:36.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:36.769 "hdgst": false, 00:34:36.769 "ddgst": false 00:34:36.769 }, 00:34:36.769 "method": "bdev_nvme_attach_controller" 00:34:36.769 }' 00:34:36.769 [2024-05-15 15:52:49.723188] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:36.770 [2024-05-15 15:52:49.723288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475360 ] 00:34:36.770 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.770 [2024-05-15 15:52:49.763609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
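The JSON fragment printed just above is the bdev_nvme_attach_controller configuration that gen_nvmf_target_json hands to bdevperf over an anonymous file descriptor. A sketch of an equivalent standalone invocation from the SPDK repository root, with the same parameters saved to a regular file instead of /dev/fd/62 and wrapped in the standard SPDK "subsystems" config structure; the /tmp/bdevperf_nvme.json name is only illustrative:

# Write the attach-controller config bdevperf should apply at startup
cat <<'EOF' > /tmp/bdevperf_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 1-second verify run, queue depth 128, 4 KiB I/O, as in the trace above
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1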
00:34:36.770 [2024-05-15 15:52:49.797120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.027 [2024-05-15 15:52:49.879323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.027 Running I/O for 1 seconds... 00:34:37.960 00:34:37.960 Latency(us) 00:34:37.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.960 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:37.960 Verification LBA range: start 0x0 length 0x4000 00:34:37.960 Nvme1n1 : 1.00 8210.25 32.07 0.00 0.00 15523.74 922.36 16311.18 00:34:37.960 =================================================================================================================== 00:34:37.960 Total : 8210.25 32.07 0.00 0.00 15523.74 922.36 16311.18 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1475581 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:38.217 { 00:34:38.217 "params": { 00:34:38.217 "name": "Nvme$subsystem", 00:34:38.217 "trtype": "$TEST_TRANSPORT", 00:34:38.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.217 "adrfam": "ipv4", 00:34:38.217 "trsvcid": "$NVMF_PORT", 00:34:38.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.217 "hdgst": ${hdgst:-false}, 00:34:38.217 "ddgst": ${ddgst:-false} 00:34:38.217 }, 00:34:38.217 "method": "bdev_nvme_attach_controller" 00:34:38.217 } 00:34:38.217 EOF 00:34:38.217 )") 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:38.217 15:52:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:38.217 "params": { 00:34:38.217 "name": "Nvme1", 00:34:38.217 "trtype": "tcp", 00:34:38.217 "traddr": "10.0.0.2", 00:34:38.217 "adrfam": "ipv4", 00:34:38.217 "trsvcid": "4420", 00:34:38.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:38.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:38.217 "hdgst": false, 00:34:38.217 "ddgst": false 00:34:38.217 }, 00:34:38.217 "method": "bdev_nvme_attach_controller" 00:34:38.217 }' 00:34:38.218 [2024-05-15 15:52:51.318334] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:38.218 [2024-05-15 15:52:51.318423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475581 ] 00:34:38.475 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.475 [2024-05-15 15:52:51.355094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
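The harness next starts a second, 15-second bdevperf job and then kills the target out from under it (kill -9 on the nvmf_tgt pid, 1475301 in this run) to exercise the host-side error path; the stream of ABORTED - SQ DELETION completions that follows in the log is the result of that step. A sketch of the fault-injection sequence, reusing the illustrative /tmp/bdevperf_nvme.json from the previous sketch and assuming $nvmfpid still holds the target's pid:

# Long-running verify job in the background; -q/-o/-w/-t/-f as in the trace
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
sleep 3
# Take the target away mid-run; in-flight commands then complete as ABORTED - SQ DELETION
kill -9 "$nvmfpid"
wait "$bdevperf_pid"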
There is no support for it in SPDK. Enabled only for validation. 00:34:38.475 [2024-05-15 15:52:51.389486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.475 [2024-05-15 15:52:51.476599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.733 Running I/O for 15 seconds... 00:34:41.261 15:52:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1475301 00:34:41.261 15:52:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:41.261 [2024-05-15 15:52:54.288047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.261 [2024-05-15 15:52:54.288606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.288981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.288999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.261 [2024-05-15 15:52:54.289292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.261 [2024-05-15 15:52:54.289306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.289969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.289994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:33832 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.262 [2024-05-15 15:52:54.290513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.262 [2024-05-15 15:52:54.290638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.262 [2024-05-15 15:52:54.290652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.290981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.290998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 
[2024-05-15 15:52:54.291805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.263 [2024-05-15 15:52:54.291945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.263 [2024-05-15 15:52:54.291960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.291979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.291995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.264 [2024-05-15 15:52:54.292371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f89680 is same with the state(5) to be set 00:34:41.264 [2024-05-15 15:52:54.292404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:41.264 [2024-05-15 15:52:54.292419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:41.264 [2024-05-15 15:52:54.292433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34376 len:8 PRP1 0x0 PRP2 0x0 00:34:41.264 [2024-05-15 15:52:54.292446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.264 [2024-05-15 15:52:54.292519] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f89680 was disconnected and freed. reset controller. 
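The run of NOTICE entries above records TCP qpair 0x1f89680 being drained: every WRITE still queued on sqid:1 is completed manually with the generic status ABORTED - SQ DELETION (sct 00, sc 08) before the qpair is disconnected and freed and the controller reset begins. Below is a minimal, illustrative sketch (not taken from the test code; the callback name and log text are hypothetical) of how an application-side SPDK completion callback could recognize that status and defer the I/O until after the reset:

    /*
     * Illustrative sketch only: an I/O completion callback that recognizes the
     * "ABORTED - SQ DELETION" status (sct 0x0, sc 0x8) printed for every queued
     * WRITE when the TCP qpair is torn down during a controller reset.
     */
    #include "spdk/nvme.h"
    #include "spdk/log.h"

    static void
    write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* The submission queue was deleted (e.g. qpair torn down for a
                 * reset); the WRITE was never executed and can be resubmitted
                 * once the controller reconnects. */
                SPDK_NOTICELOG("WRITE aborted by SQ deletion, retry after reset\n");
                return;
            }
            SPDK_ERRLOG("WRITE failed: sct=%d sc=%d\n",
                        cpl->status.sct, cpl->status.sc);
            return;
        }
        /* Normal completion path. */
    }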
00:34:41.264 [2024-05-15 15:52:54.296322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.264 [2024-05-15 15:52:54.296392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.264 [2024-05-15 15:52:54.297189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.297333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.297360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.264 [2024-05-15 15:52:54.297377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.264 [2024-05-15 15:52:54.297624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.264 [2024-05-15 15:52:54.297893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.264 [2024-05-15 15:52:54.297913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.264 [2024-05-15 15:52:54.297944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.264 [2024-05-15 15:52:54.301556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.264 [2024-05-15 15:52:54.310474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.264 [2024-05-15 15:52:54.310971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.311178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.311204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.264 [2024-05-15 15:52:54.311230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.264 [2024-05-15 15:52:54.311450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.264 [2024-05-15 15:52:54.311710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.264 [2024-05-15 15:52:54.311734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.264 [2024-05-15 15:52:54.311750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.264 [2024-05-15 15:52:54.315328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.264 [2024-05-15 15:52:54.324345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.264 [2024-05-15 15:52:54.324826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.325007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.325041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.264 [2024-05-15 15:52:54.325075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.264 [2024-05-15 15:52:54.325333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.264 [2024-05-15 15:52:54.325575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.264 [2024-05-15 15:52:54.325614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.264 [2024-05-15 15:52:54.325630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.264 [2024-05-15 15:52:54.329288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.264 [2024-05-15 15:52:54.338353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.264 [2024-05-15 15:52:54.338772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.338997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.339045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.264 [2024-05-15 15:52:54.339064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.264 [2024-05-15 15:52:54.339327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.264 [2024-05-15 15:52:54.339578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.264 [2024-05-15 15:52:54.339598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.264 [2024-05-15 15:52:54.339612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.264 [2024-05-15 15:52:54.343266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.264 [2024-05-15 15:52:54.352413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.264 [2024-05-15 15:52:54.352927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.353087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-05-15 15:52:54.353113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.264 [2024-05-15 15:52:54.353130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.264 [2024-05-15 15:52:54.353356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.264 [2024-05-15 15:52:54.353605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.264 [2024-05-15 15:52:54.353625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.264 [2024-05-15 15:52:54.353638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.264 [2024-05-15 15:52:54.357295] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.523 [2024-05-15 15:52:54.366395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.523 [2024-05-15 15:52:54.366875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.367084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.367113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.523 [2024-05-15 15:52:54.367131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.523 [2024-05-15 15:52:54.367388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.523 [2024-05-15 15:52:54.367640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.523 [2024-05-15 15:52:54.367670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.523 [2024-05-15 15:52:54.367687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.523 [2024-05-15 15:52:54.371355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.523 [2024-05-15 15:52:54.380366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.523 [2024-05-15 15:52:54.380838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.381006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.381050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.523 [2024-05-15 15:52:54.381069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.523 [2024-05-15 15:52:54.381328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.523 [2024-05-15 15:52:54.381562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.523 [2024-05-15 15:52:54.381586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.523 [2024-05-15 15:52:54.381603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.523 [2024-05-15 15:52:54.385192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.523 [2024-05-15 15:52:54.394227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.523 [2024-05-15 15:52:54.394640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.394770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.394800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.523 [2024-05-15 15:52:54.394818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.523 [2024-05-15 15:52:54.395059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.523 [2024-05-15 15:52:54.395323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.523 [2024-05-15 15:52:54.395344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.523 [2024-05-15 15:52:54.395358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.523 [2024-05-15 15:52:54.398903] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.523 [2024-05-15 15:52:54.408262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.523 [2024-05-15 15:52:54.408744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.408915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.408961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.523 [2024-05-15 15:52:54.408979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.523 [2024-05-15 15:52:54.409230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.523 [2024-05-15 15:52:54.409476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.523 [2024-05-15 15:52:54.409500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.523 [2024-05-15 15:52:54.409522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.523 [2024-05-15 15:52:54.413126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.523 [2024-05-15 15:52:54.422270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.523 [2024-05-15 15:52:54.422698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.422900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.523 [2024-05-15 15:52:54.422945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.523 [2024-05-15 15:52:54.422964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.523 [2024-05-15 15:52:54.423205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.523 [2024-05-15 15:52:54.423462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.523 [2024-05-15 15:52:54.423486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.523 [2024-05-15 15:52:54.423501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.523 [2024-05-15 15:52:54.427101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.524 [2024-05-15 15:52:54.436255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.436670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.436846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.436896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.436915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.437156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.437412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.437437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.437453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.441059] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.524 [2024-05-15 15:52:54.450198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.450630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.450838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.450886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.450905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.451146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.451401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.451426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.451442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.455050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.524 [2024-05-15 15:52:54.464195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.464632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.464850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.464895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.464914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.465155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.465411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.465436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.465452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.469054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.524 [2024-05-15 15:52:54.478193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.478637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.478790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.478831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.478847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.479096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.479354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.479378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.479394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.482995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.524 [2024-05-15 15:52:54.492136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.492559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.492766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.492812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.492831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.493072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.493328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.493353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.493369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.497008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.524 [2024-05-15 15:52:54.506155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.506559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.506783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.506830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.506849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.507091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.507347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.507372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.507389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.510989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.524 [2024-05-15 15:52:54.520130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.520554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.520782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.520830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.520848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.521089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.521345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.521369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.521385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.524987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.524 [2024-05-15 15:52:54.534150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.534650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.534821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.534872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.534891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.535133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.535389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.535414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.535430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.539033] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.524 [2024-05-15 15:52:54.548194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.548628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.548814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.548843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.548861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.549103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.549358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.524 [2024-05-15 15:52:54.549382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.524 [2024-05-15 15:52:54.549398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.524 [2024-05-15 15:52:54.553001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.524 [2024-05-15 15:52:54.562167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.524 [2024-05-15 15:52:54.562591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.562833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.524 [2024-05-15 15:52:54.562859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.524 [2024-05-15 15:52:54.562889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.524 [2024-05-15 15:52:54.563143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.524 [2024-05-15 15:52:54.563398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.525 [2024-05-15 15:52:54.563423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.525 [2024-05-15 15:52:54.563439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.525 [2024-05-15 15:52:54.567043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.525 [2024-05-15 15:52:54.576185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.525 [2024-05-15 15:52:54.576622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.576783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.576809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.525 [2024-05-15 15:52:54.576840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.525 [2024-05-15 15:52:54.577089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.525 [2024-05-15 15:52:54.577347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.525 [2024-05-15 15:52:54.577372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.525 [2024-05-15 15:52:54.577388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.525 [2024-05-15 15:52:54.580989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.525 [2024-05-15 15:52:54.590139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.525 [2024-05-15 15:52:54.590566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.590723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.590756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.525 [2024-05-15 15:52:54.590776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.525 [2024-05-15 15:52:54.591018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.525 [2024-05-15 15:52:54.591274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.525 [2024-05-15 15:52:54.591298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.525 [2024-05-15 15:52:54.591314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.525 [2024-05-15 15:52:54.594917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.525 [2024-05-15 15:52:54.604056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.525 [2024-05-15 15:52:54.604486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.604669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.604698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.525 [2024-05-15 15:52:54.604716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.525 [2024-05-15 15:52:54.604957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.525 [2024-05-15 15:52:54.605202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.525 [2024-05-15 15:52:54.605236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.525 [2024-05-15 15:52:54.605253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.525 [2024-05-15 15:52:54.608854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.525 [2024-05-15 15:52:54.618001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.525 [2024-05-15 15:52:54.618402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.618591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.525 [2024-05-15 15:52:54.618618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.525 [2024-05-15 15:52:54.618635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.525 [2024-05-15 15:52:54.618906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.525 [2024-05-15 15:52:54.619152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.525 [2024-05-15 15:52:54.619176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.525 [2024-05-15 15:52:54.619193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.525 [2024-05-15 15:52:54.622832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.784 [2024-05-15 15:52:54.632055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.784 [2024-05-15 15:52:54.632477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.632719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.632749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.784 [2024-05-15 15:52:54.632773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.784 [2024-05-15 15:52:54.633016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.784 [2024-05-15 15:52:54.633270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.784 [2024-05-15 15:52:54.633295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.784 [2024-05-15 15:52:54.633312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.784 [2024-05-15 15:52:54.636913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.784 [2024-05-15 15:52:54.646052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.784 [2024-05-15 15:52:54.646465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.646648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.646677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.784 [2024-05-15 15:52:54.646694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.784 [2024-05-15 15:52:54.646936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.784 [2024-05-15 15:52:54.647181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.784 [2024-05-15 15:52:54.647223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.784 [2024-05-15 15:52:54.647241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.784 [2024-05-15 15:52:54.650854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.784 [2024-05-15 15:52:54.660009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.784 [2024-05-15 15:52:54.660411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.660609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.660636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.784 [2024-05-15 15:52:54.660651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.784 [2024-05-15 15:52:54.660919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.784 [2024-05-15 15:52:54.661164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.784 [2024-05-15 15:52:54.661188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.784 [2024-05-15 15:52:54.661204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.784 [2024-05-15 15:52:54.664816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.784 [2024-05-15 15:52:54.673990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.784 [2024-05-15 15:52:54.674416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.674650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.674679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.784 [2024-05-15 15:52:54.674697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.784 [2024-05-15 15:52:54.674944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.784 [2024-05-15 15:52:54.675189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.784 [2024-05-15 15:52:54.675223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.784 [2024-05-15 15:52:54.675241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.784 [2024-05-15 15:52:54.678845] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.784 [2024-05-15 15:52:54.687867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.784 [2024-05-15 15:52:54.688300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.688431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.688457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.784 [2024-05-15 15:52:54.688474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.784 [2024-05-15 15:52:54.688729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.784 [2024-05-15 15:52:54.688975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.784 [2024-05-15 15:52:54.688998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.784 [2024-05-15 15:52:54.689015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.784 [2024-05-15 15:52:54.692623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.784 [2024-05-15 15:52:54.701604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.784 [2024-05-15 15:52:54.701968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.702126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.784 [2024-05-15 15:52:54.702153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.784 [2024-05-15 15:52:54.702170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.784 [2024-05-15 15:52:54.702426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.784 [2024-05-15 15:52:54.702676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.784 [2024-05-15 15:52:54.702699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.702714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.706048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.785 [2024-05-15 15:52:54.715041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.715422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.715581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.715607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.715624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.715879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.716086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.716106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.716119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.719291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.785 [2024-05-15 15:52:54.728911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.729329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.729496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.729537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.729553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.729807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.730053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.730077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.730093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.733550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.785 [2024-05-15 15:52:54.742825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.743269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.743434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.743461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.743477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.743728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.743974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.743997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.744013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.747600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.785 [2024-05-15 15:52:54.756837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.757271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.757396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.757423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.757439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.757701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.757947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.757976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.757993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.761619] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.785 [2024-05-15 15:52:54.770765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.771173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.771349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.771380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.771398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.771640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.771885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.771909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.771925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.775535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.785 [2024-05-15 15:52:54.784691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.785125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.785306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.785351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.785370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.785612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.785857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.785881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.785897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.789507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.785 [2024-05-15 15:52:54.798664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.799085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.799270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.799300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.799318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.799560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.799804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.799828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.799849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.803465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.785 [2024-05-15 15:52:54.812612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.813024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.813213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.813248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.813267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.813507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.813753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.813776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.813792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.817417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.785 [2024-05-15 15:52:54.826554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.826968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.827156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.827185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.827203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.785 [2024-05-15 15:52:54.827462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.785 [2024-05-15 15:52:54.827707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.785 [2024-05-15 15:52:54.827731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.785 [2024-05-15 15:52:54.827747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.785 [2024-05-15 15:52:54.831354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.785 [2024-05-15 15:52:54.840494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.785 [2024-05-15 15:52:54.840910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.841104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-05-15 15:52:54.841133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.785 [2024-05-15 15:52:54.841151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.786 [2024-05-15 15:52:54.841403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.786 [2024-05-15 15:52:54.841648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.786 [2024-05-15 15:52:54.841672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.786 [2024-05-15 15:52:54.841687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.786 [2024-05-15 15:52:54.845303] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.786 [2024-05-15 15:52:54.854441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.786 [2024-05-15 15:52:54.854874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-05-15 15:52:54.855030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-05-15 15:52:54.855059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.786 [2024-05-15 15:52:54.855077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.786 [2024-05-15 15:52:54.855330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.786 [2024-05-15 15:52:54.855575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.786 [2024-05-15 15:52:54.855598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.786 [2024-05-15 15:52:54.855614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.786 [2024-05-15 15:52:54.859213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:41.786 [2024-05-15 15:52:54.868361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.786 [2024-05-15 15:52:54.868799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-05-15 15:52:54.868974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-05-15 15:52:54.869017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.786 [2024-05-15 15:52:54.869035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.786 [2024-05-15 15:52:54.869288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.786 [2024-05-15 15:52:54.869534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.786 [2024-05-15 15:52:54.869557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.786 [2024-05-15 15:52:54.869573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:41.786 [2024-05-15 15:52:54.873182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.786 [2024-05-15 15:52:54.882369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.786 [2024-05-15 15:52:54.882782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-05-15 15:52:54.882940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-05-15 15:52:54.882969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:41.786 [2024-05-15 15:52:54.882987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:41.786 [2024-05-15 15:52:54.883239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:41.786 [2024-05-15 15:52:54.883494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:41.786 [2024-05-15 15:52:54.883521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:41.786 [2024-05-15 15:52:54.883538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.045 [2024-05-15 15:52:54.887169] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.045 [2024-05-15 15:52:54.896361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.045 [2024-05-15 15:52:54.896844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.897008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.897036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.045 [2024-05-15 15:52:54.897054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.045 [2024-05-15 15:52:54.897309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.045 [2024-05-15 15:52:54.897554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.045 [2024-05-15 15:52:54.897579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.045 [2024-05-15 15:52:54.897596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.045 [2024-05-15 15:52:54.901199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.045 [2024-05-15 15:52:54.910356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.045 [2024-05-15 15:52:54.910855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.910998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.911024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.045 [2024-05-15 15:52:54.911041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.045 [2024-05-15 15:52:54.911299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.045 [2024-05-15 15:52:54.911545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.045 [2024-05-15 15:52:54.911570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.045 [2024-05-15 15:52:54.911587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.045 [2024-05-15 15:52:54.915192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.045 [2024-05-15 15:52:54.924347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.045 [2024-05-15 15:52:54.924839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.925033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.925058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.045 [2024-05-15 15:52:54.925089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.045 [2024-05-15 15:52:54.925361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.045 [2024-05-15 15:52:54.925608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.045 [2024-05-15 15:52:54.925633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.045 [2024-05-15 15:52:54.925649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.045 [2024-05-15 15:52:54.929266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.045 [2024-05-15 15:52:54.938421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.045 [2024-05-15 15:52:54.938856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.939013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.939038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.045 [2024-05-15 15:52:54.939055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.045 [2024-05-15 15:52:54.939335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.045 [2024-05-15 15:52:54.939582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.045 [2024-05-15 15:52:54.939607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.045 [2024-05-15 15:52:54.939624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.045 [2024-05-15 15:52:54.943238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.045 [2024-05-15 15:52:54.952384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.045 [2024-05-15 15:52:54.952787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.952948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.045 [2024-05-15 15:52:54.952974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:54.952991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:54.953258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:54.953505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:54.953530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:54.953547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:54.957150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.046 [2024-05-15 15:52:54.966303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:54.966695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:54.966876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:54.966906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:54.966924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:54.967166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:54.967425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:54.967451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:54.967467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:54.971071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.046 [2024-05-15 15:52:54.980225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:54.980659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:54.980824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:54.980854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:54.980872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:54.981114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:54.981373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:54.981399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:54.981415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:54.985020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.046 [2024-05-15 15:52:54.994189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:54.994597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:54.994796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:54.994821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:54.994852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:54.995105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:54.995363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:54.995389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:54.995406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:54.999011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.046 [2024-05-15 15:52:55.008161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:55.008592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.008750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.008776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:55.008792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:55.009050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:55.009307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:55.009332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:55.009348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:55.012955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.046 [2024-05-15 15:52:55.022107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:55.022532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.022745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.022771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:55.022792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:55.023058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:55.023319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:55.023346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:55.023363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:55.026968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.046 [2024-05-15 15:52:55.036122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:55.036555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.036711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.036753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:55.036771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:55.037013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:55.037273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:55.037297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:55.037314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:55.040917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.046 [2024-05-15 15:52:55.050080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:55.050482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.050643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.050673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:55.050691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:55.050933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:55.051179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:55.051204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:55.051236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:55.054856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.046 [2024-05-15 15:52:55.064008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:55.064450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.064603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.064628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:55.064644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:55.064907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:55.065153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:55.065178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:55.065195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:55.068811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.046 [2024-05-15 15:52:55.077960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.046 [2024-05-15 15:52:55.078440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.078589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.046 [2024-05-15 15:52:55.078615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.046 [2024-05-15 15:52:55.078630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.046 [2024-05-15 15:52:55.078893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.046 [2024-05-15 15:52:55.079140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.046 [2024-05-15 15:52:55.079165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.046 [2024-05-15 15:52:55.079182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.046 [2024-05-15 15:52:55.082797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.046 [2024-05-15 15:52:55.091946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.047 [2024-05-15 15:52:55.092381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.092552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.092578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.047 [2024-05-15 15:52:55.092594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.047 [2024-05-15 15:52:55.092832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.047 [2024-05-15 15:52:55.093080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.047 [2024-05-15 15:52:55.093105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.047 [2024-05-15 15:52:55.093122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.047 [2024-05-15 15:52:55.096739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.047 [2024-05-15 15:52:55.105883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.047 [2024-05-15 15:52:55.106426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.106712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.106759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.047 [2024-05-15 15:52:55.106778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.047 [2024-05-15 15:52:55.107019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.047 [2024-05-15 15:52:55.107282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.047 [2024-05-15 15:52:55.107308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.047 [2024-05-15 15:52:55.107325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.047 [2024-05-15 15:52:55.110929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.047 [2024-05-15 15:52:55.119867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.047 [2024-05-15 15:52:55.120261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.120442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.120470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.047 [2024-05-15 15:52:55.120488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.047 [2024-05-15 15:52:55.120731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.047 [2024-05-15 15:52:55.120977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.047 [2024-05-15 15:52:55.121002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.047 [2024-05-15 15:52:55.121018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.047 [2024-05-15 15:52:55.124634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.047 [2024-05-15 15:52:55.133786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.047 [2024-05-15 15:52:55.134200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.134397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.047 [2024-05-15 15:52:55.134439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.047 [2024-05-15 15:52:55.134455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.047 [2024-05-15 15:52:55.134718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.047 [2024-05-15 15:52:55.134964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.047 [2024-05-15 15:52:55.134989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.047 [2024-05-15 15:52:55.135006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.047 [2024-05-15 15:52:55.138751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.305 [2024-05-15 15:52:55.147764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.305 [2024-05-15 15:52:55.148229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.148385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.148413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.305 [2024-05-15 15:52:55.148431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.305 [2024-05-15 15:52:55.148672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.305 [2024-05-15 15:52:55.148916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.305 [2024-05-15 15:52:55.148946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.305 [2024-05-15 15:52:55.148963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.305 [2024-05-15 15:52:55.152615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.305 [2024-05-15 15:52:55.161772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.305 [2024-05-15 15:52:55.162178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.162390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.162433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.305 [2024-05-15 15:52:55.162449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.305 [2024-05-15 15:52:55.162704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.305 [2024-05-15 15:52:55.162950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.305 [2024-05-15 15:52:55.162975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.305 [2024-05-15 15:52:55.162992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.305 [2024-05-15 15:52:55.166606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.305 [2024-05-15 15:52:55.175768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.305 [2024-05-15 15:52:55.176210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.176353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.176378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.305 [2024-05-15 15:52:55.176394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.305 [2024-05-15 15:52:55.176643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.305 [2024-05-15 15:52:55.176887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.305 [2024-05-15 15:52:55.176912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.305 [2024-05-15 15:52:55.176929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.305 [2024-05-15 15:52:55.180543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.305 [2024-05-15 15:52:55.189693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.305 [2024-05-15 15:52:55.190120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.190302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.190347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.305 [2024-05-15 15:52:55.190366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.305 [2024-05-15 15:52:55.190607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.305 [2024-05-15 15:52:55.190853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.305 [2024-05-15 15:52:55.190878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.305 [2024-05-15 15:52:55.190900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.305 [2024-05-15 15:52:55.194519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.305 [2024-05-15 15:52:55.203678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.305 [2024-05-15 15:52:55.204113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.305 [2024-05-15 15:52:55.204265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.204291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.204323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.204566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.204812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.204837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.204854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.208469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.306 [2024-05-15 15:52:55.217623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.218052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.218209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.218249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.218267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.218508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.218753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.218777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.218793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.222406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.306 [2024-05-15 15:52:55.231580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.232007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.232260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.232290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.232309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.232552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.232798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.232822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.232838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.236458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.306 [2024-05-15 15:52:55.245614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.246030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.246205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.246238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.246255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.246505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.246751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.246776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.246793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.250403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.306 [2024-05-15 15:52:55.259553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.259969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.260174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.260199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.260223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.260491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.260738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.260763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.260779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.264393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.306 [2024-05-15 15:52:55.273544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.273961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.274094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.274122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.274139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.274391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.274637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.274661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.274678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.278293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.306 [2024-05-15 15:52:55.287453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.287922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.288119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.288147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.288165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.288417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.288663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.288686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.288702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.292314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.306 [2024-05-15 15:52:55.301486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.301957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.302129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.302169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.302188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.302437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.302692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.302718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.302734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.306353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.306 [2024-05-15 15:52:55.315509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.315922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.316106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.316136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.316155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.316409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.316655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.316680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.316697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.320309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.306 [2024-05-15 15:52:55.329475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.306 [2024-05-15 15:52:55.329966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.330119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.306 [2024-05-15 15:52:55.330144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.306 [2024-05-15 15:52:55.330160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.306 [2024-05-15 15:52:55.330435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.306 [2024-05-15 15:52:55.330683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.306 [2024-05-15 15:52:55.330708] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.306 [2024-05-15 15:52:55.330724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.306 [2024-05-15 15:52:55.334340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.307 [2024-05-15 15:52:55.343487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.307 [2024-05-15 15:52:55.343914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.344071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.344097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.307 [2024-05-15 15:52:55.344113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.307 [2024-05-15 15:52:55.344390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.307 [2024-05-15 15:52:55.344638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.307 [2024-05-15 15:52:55.344663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.307 [2024-05-15 15:52:55.344679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.307 [2024-05-15 15:52:55.348289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.307 [2024-05-15 15:52:55.357437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.307 [2024-05-15 15:52:55.357861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.358042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.358071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.307 [2024-05-15 15:52:55.358088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.307 [2024-05-15 15:52:55.358342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.307 [2024-05-15 15:52:55.358588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.307 [2024-05-15 15:52:55.358613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.307 [2024-05-15 15:52:55.358629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.307 [2024-05-15 15:52:55.362240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.307 [2024-05-15 15:52:55.371403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.307 [2024-05-15 15:52:55.371822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.371987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.372017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.307 [2024-05-15 15:52:55.372035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.307 [2024-05-15 15:52:55.372289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.307 [2024-05-15 15:52:55.372535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.307 [2024-05-15 15:52:55.372559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.307 [2024-05-15 15:52:55.372576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.307 [2024-05-15 15:52:55.376182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.307 [2024-05-15 15:52:55.385334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.307 [2024-05-15 15:52:55.385726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.385915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.385957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.307 [2024-05-15 15:52:55.385975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.307 [2024-05-15 15:52:55.386231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.307 [2024-05-15 15:52:55.386484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.307 [2024-05-15 15:52:55.386509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.307 [2024-05-15 15:52:55.386525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.307 [2024-05-15 15:52:55.390132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.307 [2024-05-15 15:52:55.399297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.307 [2024-05-15 15:52:55.399713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.399895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.307 [2024-05-15 15:52:55.399924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.307 [2024-05-15 15:52:55.399942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.307 [2024-05-15 15:52:55.400184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.307 [2024-05-15 15:52:55.400439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.307 [2024-05-15 15:52:55.400474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.307 [2024-05-15 15:52:55.400490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.307 [2024-05-15 15:52:55.404138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.566 [2024-05-15 15:52:55.413370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.413787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.413955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.413996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.414017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.414289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.414535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.414560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.414576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.418182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.566 [2024-05-15 15:52:55.427347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.427817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.428019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.428067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.428085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.428343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.428589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.428613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.428629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.432243] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.566 [2024-05-15 15:52:55.441411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.441948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.442128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.442155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.442173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.442423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.442669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.442694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.442710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.446319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.566 [2024-05-15 15:52:55.455480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.455971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.456170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.456199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.456245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.456505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.456750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.456774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.456790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.460403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.566 [2024-05-15 15:52:55.469562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.469959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.470112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.470147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.470165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.470418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.470664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.470688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.470705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.474319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.566 [2024-05-15 15:52:55.483472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.483987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.484149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.484178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.484197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.484450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.484696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.484720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.484736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.488347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.566 [2024-05-15 15:52:55.497499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.497925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.498138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.498185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.566 [2024-05-15 15:52:55.498203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.566 [2024-05-15 15:52:55.498453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.566 [2024-05-15 15:52:55.498708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.566 [2024-05-15 15:52:55.498733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.566 [2024-05-15 15:52:55.498749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.566 [2024-05-15 15:52:55.502360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.566 [2024-05-15 15:52:55.511505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.566 [2024-05-15 15:52:55.511999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.566 [2024-05-15 15:52:55.512179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.512207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.512236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.512479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.512725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.512748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.512765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.516376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.567 [2024-05-15 15:52:55.525520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.525944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.526081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.526112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.526130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.526385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.526632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.526657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.526674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.530292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.567 [2024-05-15 15:52:55.539440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.539864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.540056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.540081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.540097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.540383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.540631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.540660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.540677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.544295] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.567 [2024-05-15 15:52:55.553474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.553967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.554145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.554173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.554191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.554442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.554699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.554724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.554741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.558361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.567 [2024-05-15 15:52:55.567509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.568001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.568138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.568179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.568195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.568479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.568725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.568747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.568762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.572380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.567 [2024-05-15 15:52:55.581536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.581929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.582132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.582157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.582188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.582467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.582714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.582739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.582761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.586374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.567 [2024-05-15 15:52:55.595526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.595995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.596177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.596204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.596231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.596474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.596720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.596744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.596760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.600377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.567 [2024-05-15 15:52:55.609534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.609960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.610122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.610150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.610168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.610422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.610667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.610691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.610706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.614318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.567 [2024-05-15 15:52:55.623467] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.623878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.624068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.624096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.624113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.624369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.624613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.624638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.624654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.628275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.567 [2024-05-15 15:52:55.637423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.567 [2024-05-15 15:52:55.637845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.638015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.567 [2024-05-15 15:52:55.638058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.567 [2024-05-15 15:52:55.638076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.567 [2024-05-15 15:52:55.638331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.567 [2024-05-15 15:52:55.638578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.567 [2024-05-15 15:52:55.638602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.567 [2024-05-15 15:52:55.638619] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.567 [2024-05-15 15:52:55.642230] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.568 [2024-05-15 15:52:55.651386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.568 [2024-05-15 15:52:55.651812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.568 [2024-05-15 15:52:55.652022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.568 [2024-05-15 15:52:55.652047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.568 [2024-05-15 15:52:55.652062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.568 [2024-05-15 15:52:55.652330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.568 [2024-05-15 15:52:55.652577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.568 [2024-05-15 15:52:55.652602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.568 [2024-05-15 15:52:55.652618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.568 [2024-05-15 15:52:55.656229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.568 [2024-05-15 15:52:55.665401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.568 [2024-05-15 15:52:55.665828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.568 [2024-05-15 15:52:55.666128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.568 [2024-05-15 15:52:55.666190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.568 [2024-05-15 15:52:55.666212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.568 [2024-05-15 15:52:55.666468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.827 [2024-05-15 15:52:55.666714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.827 [2024-05-15 15:52:55.666739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.827 [2024-05-15 15:52:55.666756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.827 [2024-05-15 15:52:55.670389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.827 [2024-05-15 15:52:55.679363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.827 [2024-05-15 15:52:55.679760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.679940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.679969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.827 [2024-05-15 15:52:55.679986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.827 [2024-05-15 15:52:55.680243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.827 [2024-05-15 15:52:55.680488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.827 [2024-05-15 15:52:55.680512] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.827 [2024-05-15 15:52:55.680529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.827 [2024-05-15 15:52:55.684130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.827 [2024-05-15 15:52:55.693288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.827 [2024-05-15 15:52:55.693712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.693851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.693878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.827 [2024-05-15 15:52:55.693896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.827 [2024-05-15 15:52:55.694138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.827 [2024-05-15 15:52:55.694394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.827 [2024-05-15 15:52:55.694421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.827 [2024-05-15 15:52:55.694437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.827 [2024-05-15 15:52:55.698041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.827 [2024-05-15 15:52:55.707191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.827 [2024-05-15 15:52:55.707683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.707831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.707856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.827 [2024-05-15 15:52:55.707872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.827 [2024-05-15 15:52:55.708127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.827 [2024-05-15 15:52:55.708385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.827 [2024-05-15 15:52:55.708410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.827 [2024-05-15 15:52:55.708427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.827 [2024-05-15 15:52:55.712031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.827 [2024-05-15 15:52:55.721193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.827 [2024-05-15 15:52:55.721634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.721792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.827 [2024-05-15 15:52:55.721821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.827 [2024-05-15 15:52:55.721840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.722082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.722340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.722366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.722382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.725988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.828 [2024-05-15 15:52:55.735140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.735566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.735725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.735754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.735773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.736015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.736275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.736301] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.736317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.739920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.828 [2024-05-15 15:52:55.749068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.749479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.749620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.749647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.749665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.749906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.750152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.750177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.750194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.753809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.828 [2024-05-15 15:52:55.762966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.763367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.763521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.763547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.763563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.763828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.764073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.764097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.764113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.767726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.828 [2024-05-15 15:52:55.776717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.777071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.777243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.777269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.777285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.777503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.777724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.777747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.777762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.781001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.828 [2024-05-15 15:52:55.790164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.790548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.790699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.790732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.790749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.791008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.791279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.791303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.791317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.794900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.828 [2024-05-15 15:52:55.803866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.804291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.804424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.804451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.804473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.804727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.804995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.805018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.805032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.808608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.828 [2024-05-15 15:52:55.817769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.818198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.818384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.818410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.828 [2024-05-15 15:52:55.818426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.828 [2024-05-15 15:52:55.818679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.828 [2024-05-15 15:52:55.818925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.828 [2024-05-15 15:52:55.818950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.828 [2024-05-15 15:52:55.818966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.828 [2024-05-15 15:52:55.822581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.828 [2024-05-15 15:52:55.831685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.828 [2024-05-15 15:52:55.832104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.828 [2024-05-15 15:52:55.832277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.832304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.832321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.832575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.832821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.832846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.832862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.836458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.829 [2024-05-15 15:52:55.845673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.829 [2024-05-15 15:52:55.846088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.846213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.846265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.846283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.846521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.846768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.846792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.846808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.850446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.829 [2024-05-15 15:52:55.859563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.829 [2024-05-15 15:52:55.860051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.860202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.860235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.860253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.860509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.860754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.860778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.860795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.864403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.829 [2024-05-15 15:52:55.873548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.829 [2024-05-15 15:52:55.873941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.874102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.874131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.874149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.874402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.874647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.874671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.874687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.878308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.829 [2024-05-15 15:52:55.887470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.829 [2024-05-15 15:52:55.888011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.888221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.888250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.888269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.888515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.888761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.888785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.888801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.892409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:42.829 [2024-05-15 15:52:55.901351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.829 [2024-05-15 15:52:55.901781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.901963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.901992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.902010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.902261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.902506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.902530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.902546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.906148] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:42.829 [2024-05-15 15:52:55.915323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.829 [2024-05-15 15:52:55.915749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.915905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.829 [2024-05-15 15:52:55.915933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:42.829 [2024-05-15 15:52:55.915951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:42.829 [2024-05-15 15:52:55.916192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:42.829 [2024-05-15 15:52:55.916447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.829 [2024-05-15 15:52:55.916472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.829 [2024-05-15 15:52:55.916488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.829 [2024-05-15 15:52:55.920088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.089 [2024-05-15 15:52:55.929310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.089 [2024-05-15 15:52:55.929842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.930177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.930243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.089 [2024-05-15 15:52:55.930262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.089 [2024-05-15 15:52:55.930503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.089 [2024-05-15 15:52:55.930754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.089 [2024-05-15 15:52:55.930778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.089 [2024-05-15 15:52:55.930794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.089 [2024-05-15 15:52:55.934434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.089 [2024-05-15 15:52:55.943382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.089 [2024-05-15 15:52:55.943858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.944051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.944093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.089 [2024-05-15 15:52:55.944112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.089 [2024-05-15 15:52:55.944364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.089 [2024-05-15 15:52:55.944610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.089 [2024-05-15 15:52:55.944633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.089 [2024-05-15 15:52:55.944650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.089 [2024-05-15 15:52:55.948259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.089 [2024-05-15 15:52:55.957404] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.089 [2024-05-15 15:52:55.957870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.958067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.958109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.089 [2024-05-15 15:52:55.958128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.089 [2024-05-15 15:52:55.958381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.089 [2024-05-15 15:52:55.958627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.089 [2024-05-15 15:52:55.958651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.089 [2024-05-15 15:52:55.958667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.089 [2024-05-15 15:52:55.962276] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.089 [2024-05-15 15:52:55.971409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.089 [2024-05-15 15:52:55.971897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.972122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.972151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.089 [2024-05-15 15:52:55.972169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.089 [2024-05-15 15:52:55.972419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.089 [2024-05-15 15:52:55.972665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.089 [2024-05-15 15:52:55.972688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.089 [2024-05-15 15:52:55.972710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.089 [2024-05-15 15:52:55.976321] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.089 [2024-05-15 15:52:55.985463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.089 [2024-05-15 15:52:55.985993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.986165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:55.986193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.089 [2024-05-15 15:52:55.986211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.089 [2024-05-15 15:52:55.986463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.089 [2024-05-15 15:52:55.986708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.089 [2024-05-15 15:52:55.986732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.089 [2024-05-15 15:52:55.986749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.089 [2024-05-15 15:52:55.990358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.089 [2024-05-15 15:52:55.999497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.089 [2024-05-15 15:52:55.999981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:56.000187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-05-15 15:52:56.000224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.089 [2024-05-15 15:52:56.000245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.089 [2024-05-15 15:52:56.000485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.089 [2024-05-15 15:52:56.000731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.089 [2024-05-15 15:52:56.000755] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.089 [2024-05-15 15:52:56.000771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.089 [2024-05-15 15:52:56.004380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.089 [2024-05-15 15:52:56.013517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.013986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.014141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.014170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.014188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.014436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.014681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.014705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.014726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.018335] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.090 [2024-05-15 15:52:56.027482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.027961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.028139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.028166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.028182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.028470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.028716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.028740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.028757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.032399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.090 [2024-05-15 15:52:56.041537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.041936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.042190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.042223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.042240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.042502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.042747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.042772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.042788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.046396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.090 [2024-05-15 15:52:56.055543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.056071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.056295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.056322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.056339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.056592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.056837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.056861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.056877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.060486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.090 [2024-05-15 15:52:56.069417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.069809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.070005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.070034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.070052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.070305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.070551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.070576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.070592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.074195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.090 [2024-05-15 15:52:56.083342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.083757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.083921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.083948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.083965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.084241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.084487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.084511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.084527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.088129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.090 [2024-05-15 15:52:56.097280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.097669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.097848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.097877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.097895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.098135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.098391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.098416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.098432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.102032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.090 [2024-05-15 15:52:56.111166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.111601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.111781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.111824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.111842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.112083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.112338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.112363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.112380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.115979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.090 [2024-05-15 15:52:56.125123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.125554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.125736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.125765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.125782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.126023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.126280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.090 [2024-05-15 15:52:56.126305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.090 [2024-05-15 15:52:56.126321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.090 [2024-05-15 15:52:56.129929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.090 [2024-05-15 15:52:56.139081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.090 [2024-05-15 15:52:56.139478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.139663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-05-15 15:52:56.139692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.090 [2024-05-15 15:52:56.139710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.090 [2024-05-15 15:52:56.139952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.090 [2024-05-15 15:52:56.140197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.091 [2024-05-15 15:52:56.140230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.091 [2024-05-15 15:52:56.140248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.091 [2024-05-15 15:52:56.143848] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.091 [2024-05-15 15:52:56.152991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.091 [2024-05-15 15:52:56.153417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-05-15 15:52:56.153583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-05-15 15:52:56.153613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.091 [2024-05-15 15:52:56.153631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.091 [2024-05-15 15:52:56.153873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.091 [2024-05-15 15:52:56.154118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.091 [2024-05-15 15:52:56.154143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.091 [2024-05-15 15:52:56.154158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.091 [2024-05-15 15:52:56.157771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.091 [2024-05-15 15:52:56.167038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.091 [2024-05-15 15:52:56.167491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-05-15 15:52:56.167711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-05-15 15:52:56.167741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.091 [2024-05-15 15:52:56.167759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.091 [2024-05-15 15:52:56.168000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.091 [2024-05-15 15:52:56.168256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.091 [2024-05-15 15:52:56.168282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.091 [2024-05-15 15:52:56.168298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.091 [2024-05-15 15:52:56.171899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.091 [2024-05-15 15:52:56.181042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.091 [2024-05-15 15:52:56.181483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-05-15 15:52:56.181642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-05-15 15:52:56.181672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.091 [2024-05-15 15:52:56.181691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.091 [2024-05-15 15:52:56.181932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.091 [2024-05-15 15:52:56.182177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.091 [2024-05-15 15:52:56.182201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.091 [2024-05-15 15:52:56.182229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.091 [2024-05-15 15:52:56.185850] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.350 [2024-05-15 15:52:56.195073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.350 [2024-05-15 15:52:56.195539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.195713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.195743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.350 [2024-05-15 15:52:56.195767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.350 [2024-05-15 15:52:56.196009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.350 [2024-05-15 15:52:56.196264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.350 [2024-05-15 15:52:56.196290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.350 [2024-05-15 15:52:56.196306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.350 [2024-05-15 15:52:56.199907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.350 [2024-05-15 15:52:56.209052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.350 [2024-05-15 15:52:56.209587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.209803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.209833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.350 [2024-05-15 15:52:56.209851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.350 [2024-05-15 15:52:56.210092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.350 [2024-05-15 15:52:56.210349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.350 [2024-05-15 15:52:56.210374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.350 [2024-05-15 15:52:56.210390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.350 [2024-05-15 15:52:56.213990] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.350 [2024-05-15 15:52:56.222923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.350 [2024-05-15 15:52:56.223340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.223570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.223617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.350 [2024-05-15 15:52:56.223636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.350 [2024-05-15 15:52:56.223878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.350 [2024-05-15 15:52:56.224123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.350 [2024-05-15 15:52:56.224148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.350 [2024-05-15 15:52:56.224165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.350 [2024-05-15 15:52:56.227777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.350 [2024-05-15 15:52:56.236925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.350 [2024-05-15 15:52:56.237332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.237500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.237530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.350 [2024-05-15 15:52:56.237554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.350 [2024-05-15 15:52:56.237797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.350 [2024-05-15 15:52:56.238042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.350 [2024-05-15 15:52:56.238067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.350 [2024-05-15 15:52:56.238083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.350 [2024-05-15 15:52:56.241694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.350 [2024-05-15 15:52:56.250844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.350 [2024-05-15 15:52:56.251328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.251493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.251523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.350 [2024-05-15 15:52:56.251541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.350 [2024-05-15 15:52:56.251782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.350 [2024-05-15 15:52:56.252027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.350 [2024-05-15 15:52:56.252051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.350 [2024-05-15 15:52:56.252067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.350 [2024-05-15 15:52:56.255679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.350 [2024-05-15 15:52:56.264826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.350 [2024-05-15 15:52:56.265250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.265429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.350 [2024-05-15 15:52:56.265460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.350 [2024-05-15 15:52:56.265479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.350 [2024-05-15 15:52:56.265720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.350 [2024-05-15 15:52:56.265966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.265990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.266006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.269617] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.351 [2024-05-15 15:52:56.278763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.279164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.279336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.279367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.279385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.279633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.279878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.279902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.279918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.283528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.351 [2024-05-15 15:52:56.292677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.293083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.293241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.293272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.293291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.293533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.293778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.293802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.293818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.297430] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.351 [2024-05-15 15:52:56.306581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.306999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.307125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.307155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.307173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.307424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.307669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.307693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.307709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.311317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.351 [2024-05-15 15:52:56.320460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.320866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.321059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.321088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.321106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.321358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.321610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.321635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.321652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.325261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.351 [2024-05-15 15:52:56.334410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.334835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.334992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.335021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.335040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.335292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.335550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.335574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.335590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.339194] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.351 [2024-05-15 15:52:56.348347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.348775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.348934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.348963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.348981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.349233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.349479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.349503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.349519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.353298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.351 [2024-05-15 15:52:56.362241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.362670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.362855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.362884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.362902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.363144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.363401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.363430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.363447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.367048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.351 [2024-05-15 15:52:56.376193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.376616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.376830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.376860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.376878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.377119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.377376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.377401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.377417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.381019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.351 [2024-05-15 15:52:56.390160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.390584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.390767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.390796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.351 [2024-05-15 15:52:56.390814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.351 [2024-05-15 15:52:56.391056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.351 [2024-05-15 15:52:56.391312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.351 [2024-05-15 15:52:56.391337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.351 [2024-05-15 15:52:56.391353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.351 [2024-05-15 15:52:56.394953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.351 [2024-05-15 15:52:56.404095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.351 [2024-05-15 15:52:56.404508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.404658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.351 [2024-05-15 15:52:56.404688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.352 [2024-05-15 15:52:56.404706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.352 [2024-05-15 15:52:56.404947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.352 [2024-05-15 15:52:56.405192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.352 [2024-05-15 15:52:56.405225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.352 [2024-05-15 15:52:56.405249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.352 [2024-05-15 15:52:56.408855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.352 [2024-05-15 15:52:56.417998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.352 [2024-05-15 15:52:56.418395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.352 [2024-05-15 15:52:56.418658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.352 [2024-05-15 15:52:56.418687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.352 [2024-05-15 15:52:56.418705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.352 [2024-05-15 15:52:56.418946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.352 [2024-05-15 15:52:56.419191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.352 [2024-05-15 15:52:56.419226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.352 [2024-05-15 15:52:56.419245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.352 [2024-05-15 15:52:56.422846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.352 [2024-05-15 15:52:56.431991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.352 [2024-05-15 15:52:56.432423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.352 [2024-05-15 15:52:56.432580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.352 [2024-05-15 15:52:56.432609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.352 [2024-05-15 15:52:56.432627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.352 [2024-05-15 15:52:56.432868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.352 [2024-05-15 15:52:56.433114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.352 [2024-05-15 15:52:56.433138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.352 [2024-05-15 15:52:56.433154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.352 [2024-05-15 15:52:56.436766] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.352 [2024-05-15 15:52:56.445906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.352 [2024-05-15 15:52:56.446331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.352 [2024-05-15 15:52:56.446490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.352 [2024-05-15 15:52:56.446521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.352 [2024-05-15 15:52:56.446543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.352 [2024-05-15 15:52:56.446785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.352 [2024-05-15 15:52:56.447034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.352 [2024-05-15 15:52:56.447058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.352 [2024-05-15 15:52:56.447074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.450729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.611 [2024-05-15 15:52:56.459922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.460338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.460545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.460593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.460612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.460853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.461098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.461122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.461138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.464749] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.611 [2024-05-15 15:52:56.473891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.474286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.474443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.474473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.474490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.474732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.474977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.475002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.475018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.478630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.611 [2024-05-15 15:52:56.487775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.488192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.488358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.488387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.488405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.488646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.488892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.488916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.488932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.492547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.611 [2024-05-15 15:52:56.501692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.502101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.502240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.502271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.502290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.502531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.502777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.502800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.502816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.506427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.611 [2024-05-15 15:52:56.515568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.515984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.516144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.516173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.516191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.516442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.516688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.516712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.516728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.520340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.611 [2024-05-15 15:52:56.529498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.529914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.530106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.530135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.530153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.530405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.530652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.530675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.530692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.611 [2024-05-15 15:52:56.534300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.611 [2024-05-15 15:52:56.543470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.611 [2024-05-15 15:52:56.543884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.544068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.611 [2024-05-15 15:52:56.544097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.611 [2024-05-15 15:52:56.544115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.611 [2024-05-15 15:52:56.544365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.611 [2024-05-15 15:52:56.544611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.611 [2024-05-15 15:52:56.544636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.611 [2024-05-15 15:52:56.544653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.548282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.612 [2024-05-15 15:52:56.557431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.557852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.557994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.558023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.558041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.558294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.558540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.558563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.558580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.562182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.612 [2024-05-15 15:52:56.571339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.571773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.571929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.571976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.571994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.572247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.572492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.572516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.572532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.576133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.612 [2024-05-15 15:52:56.585288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.585682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.585814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.585848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.585867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.586108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.586363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.586388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.586404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.590006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.612 [2024-05-15 15:52:56.599366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.599779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.599951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.599980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.599998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.600250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.600495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.600519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.600536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.604139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.612 [2024-05-15 15:52:56.613295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.613688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.613869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.613916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.613935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.614176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.614430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.614455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.614470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.618071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.612 [2024-05-15 15:52:56.627227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.627652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.627835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.627864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.627886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.628128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.628390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.628415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.628432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.632036] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.612 [2024-05-15 15:52:56.641192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.641641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.641956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.642004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.642022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.642274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.642520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.642544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.642560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.612 [2024-05-15 15:52:56.646163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.612 [2024-05-15 15:52:56.655101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.612 [2024-05-15 15:52:56.655614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.655863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.612 [2024-05-15 15:52:56.655891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.612 [2024-05-15 15:52:56.655910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.612 [2024-05-15 15:52:56.656150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.612 [2024-05-15 15:52:56.656404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.612 [2024-05-15 15:52:56.656429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.612 [2024-05-15 15:52:56.656446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.613 [2024-05-15 15:52:56.660051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.613 [2024-05-15 15:52:56.668989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.613 [2024-05-15 15:52:56.669402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.613 [2024-05-15 15:52:56.669569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.613 [2024-05-15 15:52:56.669599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.613 [2024-05-15 15:52:56.669617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.613 [2024-05-15 15:52:56.669864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.613 [2024-05-15 15:52:56.670110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.613 [2024-05-15 15:52:56.670134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.613 [2024-05-15 15:52:56.670150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.613 [2024-05-15 15:52:56.673777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.613 [2024-05-15 15:52:56.682926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.613 [2024-05-15 15:52:56.683323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.613 [2024-05-15 15:52:56.683453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.613 [2024-05-15 15:52:56.683482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.613 [2024-05-15 15:52:56.683500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.613 [2024-05-15 15:52:56.683741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.613 [2024-05-15 15:52:56.683986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.613 [2024-05-15 15:52:56.684009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.613 [2024-05-15 15:52:56.684026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.613 [2024-05-15 15:52:56.687641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.613 [2024-05-15 15:52:56.697000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.613 [2024-05-15 15:52:56.697401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.613 [2024-05-15 15:52:56.697560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.613 [2024-05-15 15:52:56.697588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.613 [2024-05-15 15:52:56.697607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.613 [2024-05-15 15:52:56.697848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.613 [2024-05-15 15:52:56.698093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.613 [2024-05-15 15:52:56.698117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.613 [2024-05-15 15:52:56.698133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.613 [2024-05-15 15:52:56.701745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.613 [2024-05-15 15:52:56.710922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.613 [2024-05-15 15:52:56.711349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.711487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.711516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.711535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.711796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.712054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.712079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.712095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.715709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.872 [2024-05-15 15:52:56.724877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.725290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.725428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.725457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.725475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.725716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.725961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.725986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.726002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.729619] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.872 [2024-05-15 15:52:56.738765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.739192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.739354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.739384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.739402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.739643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.739889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.739913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.739929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.743541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.872 [2024-05-15 15:52:56.752687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.753113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.753271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.753301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.753319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.753561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.753807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.753836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.753853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.757468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.872 [2024-05-15 15:52:56.766621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.767050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.767234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.767264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.767283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.767524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.767769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.767793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.767809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.771419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.872 [2024-05-15 15:52:56.780568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.780960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.781120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.781148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.781167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.781419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.781665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.781689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.781705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.785316] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.872 [2024-05-15 15:52:56.794459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.794864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.795042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.795071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.795089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.795340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.795586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.795610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.795634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.799244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.872 [2024-05-15 15:52:56.808392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.808924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.809136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.809165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.809183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.809434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.809680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.809704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.809720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.872 [2024-05-15 15:52:56.813330] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.872 [2024-05-15 15:52:56.822266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.872 [2024-05-15 15:52:56.822678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.822815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.872 [2024-05-15 15:52:56.822845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.872 [2024-05-15 15:52:56.822864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.872 [2024-05-15 15:52:56.823106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.872 [2024-05-15 15:52:56.823363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.872 [2024-05-15 15:52:56.823388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.872 [2024-05-15 15:52:56.823404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.827007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.873 [2024-05-15 15:52:56.836154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.836556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.836714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.836743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.836761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.837001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.837258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.837282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.837298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.840906] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.873 [2024-05-15 15:52:56.850053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.850476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.850680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.850709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.850727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.850968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.851212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.851246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.851263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.854866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.873 [2024-05-15 15:52:56.864013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.864444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.864604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.864633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.864651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.864893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.865138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.865162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.865178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.868792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.873 [2024-05-15 15:52:56.877933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.878353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.878480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.878510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.878528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.878769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.879014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.879038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.879054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.882663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.873 [2024-05-15 15:52:56.891805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.892201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.892388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.892417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.892435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.892676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.892922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.892946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.892962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.896574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.873 [2024-05-15 15:52:56.905713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.906120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.906308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.906338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.906356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.906598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.906843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.906868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.906884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.910497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.873 [2024-05-15 15:52:56.919645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.920031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.920191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.920229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.920249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.920490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.920736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.920760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.920776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.924387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.873 [2024-05-15 15:52:56.933536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.934039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.934230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.934260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.934278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.934519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.934764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.934789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.934805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.938418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:43.873 [2024-05-15 15:52:56.947567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.948042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.948210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.948247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.873 [2024-05-15 15:52:56.948267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.873 [2024-05-15 15:52:56.948508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.873 [2024-05-15 15:52:56.948752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.873 [2024-05-15 15:52:56.948776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.873 [2024-05-15 15:52:56.948792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.873 [2024-05-15 15:52:56.952401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:43.873 [2024-05-15 15:52:56.961548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:43.873 [2024-05-15 15:52:56.961971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.962126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.873 [2024-05-15 15:52:56.962155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:43.874 [2024-05-15 15:52:56.962173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:43.874 [2024-05-15 15:52:56.962422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:43.874 [2024-05-15 15:52:56.962668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.874 [2024-05-15 15:52:56.962691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.874 [2024-05-15 15:52:56.962707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.874 [2024-05-15 15:52:56.966321] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.132 [2024-05-15 15:52:56.975551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.132 [2024-05-15 15:52:56.975975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-05-15 15:52:56.976175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-05-15 15:52:56.976229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.132 [2024-05-15 15:52:56.976251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.132 [2024-05-15 15:52:56.976499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.132 [2024-05-15 15:52:56.976749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.132 [2024-05-15 15:52:56.976773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.132 [2024-05-15 15:52:56.976789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.132 [2024-05-15 15:52:56.980416] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.133 [2024-05-15 15:52:56.989586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:56.990010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:56.990193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:56.990237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:56.990257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:56.990499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:56.990743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:56.990767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:56.990783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:56.994393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.133 [2024-05-15 15:52:57.003570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.004071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.004259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.004289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.004307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.004548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.004794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.004817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:57.004833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:57.008448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.133 [2024-05-15 15:52:57.017594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.017986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.018161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.018191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.018236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.018481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.018737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.018761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:57.018777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:57.022392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.133 [2024-05-15 15:52:57.031556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.031970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.032105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.032136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.032154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.032406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.032652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.032676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:57.032692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:57.036304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.133 [2024-05-15 15:52:57.045456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.045871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.046028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.046057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.046075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.046329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.046574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.046597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:57.046613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:57.050239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.133 [2024-05-15 15:52:57.059399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.059810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.060002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.060031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.060050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.060319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.060565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.060589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:57.060606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:57.064210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.133 [2024-05-15 15:52:57.073393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.073820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.073984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.074013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.074031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.074282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.074527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.074551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.133 [2024-05-15 15:52:57.074567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.133 [2024-05-15 15:52:57.078174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.133 [2024-05-15 15:52:57.087343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.133 [2024-05-15 15:52:57.087743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.087929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-05-15 15:52:57.087958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.133 [2024-05-15 15:52:57.087976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.133 [2024-05-15 15:52:57.088228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.133 [2024-05-15 15:52:57.088474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.133 [2024-05-15 15:52:57.088498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.088514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.092117] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.134 [2024-05-15 15:52:57.101271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.101696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.101937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.101965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.101984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.102235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.102487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.102511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.102527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.106132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.134 [2024-05-15 15:52:57.115294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.115707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.115887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.115915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.115932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.116173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.116428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.116452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.116468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.120128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.134 [2024-05-15 15:52:57.129308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.129729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.130020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.130072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.130091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.130347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.130595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.130620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.130636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.134246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.134 [2024-05-15 15:52:57.143172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.143606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.143732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.143763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.143782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.144024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.144283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.144314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.144331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.147935] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.134 [2024-05-15 15:52:57.157093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.157507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.157756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.157786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.157805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.158047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.158306] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.158331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.158348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.161950] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.134 [2024-05-15 15:52:57.171102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.171525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.171731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.171761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.171780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.172022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.172280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.172305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.172322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.175928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.134 [2024-05-15 15:52:57.185079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.185504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.185685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.185714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.185733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.185975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.186233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.186259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.186280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.189887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.134 [2024-05-15 15:52:57.199039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.199474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.199638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.199665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.199682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.134 [2024-05-15 15:52:57.199925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.134 [2024-05-15 15:52:57.200170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.134 [2024-05-15 15:52:57.200195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.134 [2024-05-15 15:52:57.200211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.134 [2024-05-15 15:52:57.203829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.134 [2024-05-15 15:52:57.212980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.134 [2024-05-15 15:52:57.213404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.213694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-05-15 15:52:57.213750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.134 [2024-05-15 15:52:57.213768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.135 [2024-05-15 15:52:57.214010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.135 [2024-05-15 15:52:57.214266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.135 [2024-05-15 15:52:57.214291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.135 [2024-05-15 15:52:57.214309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.135 [2024-05-15 15:52:57.217911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.135 [2024-05-15 15:52:57.226853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.135 [2024-05-15 15:52:57.227290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.135 [2024-05-15 15:52:57.227449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.135 [2024-05-15 15:52:57.227479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.135 [2024-05-15 15:52:57.227497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.135 [2024-05-15 15:52:57.227739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.135 [2024-05-15 15:52:57.227986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.135 [2024-05-15 15:52:57.228010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.135 [2024-05-15 15:52:57.228027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.135 [2024-05-15 15:52:57.231692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.395 [2024-05-15 15:52:57.240905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.395 [2024-05-15 15:52:57.241330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.241514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.241542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.395 [2024-05-15 15:52:57.241560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.395 [2024-05-15 15:52:57.241802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.395 [2024-05-15 15:52:57.242049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.395 [2024-05-15 15:52:57.242073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.395 [2024-05-15 15:52:57.242089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.395 [2024-05-15 15:52:57.245707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.395 [2024-05-15 15:52:57.254863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.395 [2024-05-15 15:52:57.255288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.255446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.255475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.395 [2024-05-15 15:52:57.255493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.395 [2024-05-15 15:52:57.255734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.395 [2024-05-15 15:52:57.255979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.395 [2024-05-15 15:52:57.256004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.395 [2024-05-15 15:52:57.256020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.395 [2024-05-15 15:52:57.259632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.395 [2024-05-15 15:52:57.268781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.395 [2024-05-15 15:52:57.269197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.269373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.269403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.395 [2024-05-15 15:52:57.269420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.395 [2024-05-15 15:52:57.269661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.395 [2024-05-15 15:52:57.269907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.395 [2024-05-15 15:52:57.269932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.395 [2024-05-15 15:52:57.269949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.395 [2024-05-15 15:52:57.273558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1475301 Killed "${NVMF_APP[@]}" "$@" 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.395 [2024-05-15 15:52:57.282718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.395 [2024-05-15 15:52:57.283143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.283317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.283346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.395 [2024-05-15 15:52:57.283366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.395 [2024-05-15 15:52:57.283608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.395 [2024-05-15 15:52:57.283852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.395 [2024-05-15 15:52:57.283876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.395 [2024-05-15 15:52:57.283891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1476249 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1476249 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1476249 ']' 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:44.395 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.395 [2024-05-15 15:52:57.287504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
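At this point the script has killed the previous nvmf_tgt instance (the "Killed" line from bdevperf.sh) and tgt_init relaunches it via nvmfappstart with core mask 0xE, then waits for the new process to serve its RPC UNIX socket at /var/tmp/spdk.sock before issuing further RPCs. The real waitforlisten helper is a bash function in the SPDK test framework that also tracks the PID; the Python sketch below is only a rough stand-in, assuming a simple "poll the socket until it accepts" behaviour.

import socket
import time

def wait_for_listen(sock_path: str, timeout_s: float = 100.0, interval_s: float = 0.5) -> bool:
    """Poll a UNIX domain socket until something accepts connections on it.

    Rough stand-in for the test framework's waitforlisten helper; the real helper
    is a bash function and additionally checks that the target PID is still alive.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True              # target is up and listening on its RPC socket
        except OSError:
            time.sleep(interval_s)   # not ready yet (ENOENT/ECONNREFUSED); retry
        finally:
            s.close()
    return False

if __name__ == "__main__":
    # /var/tmp/spdk.sock matches the RPC socket path printed in the log above.
    print(wait_for_listen("/var/tmp/spdk.sock"))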
00:34:44.395 [2024-05-15 15:52:57.296658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.395 [2024-05-15 15:52:57.297076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.297213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.395 [2024-05-15 15:52:57.297249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.395 [2024-05-15 15:52:57.297266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.395 [2024-05-15 15:52:57.297508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.395 [2024-05-15 15:52:57.297752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.395 [2024-05-15 15:52:57.297775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.395 [2024-05-15 15:52:57.297792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.395 [2024-05-15 15:52:57.301400] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.395 [2024-05-15 15:52:57.310550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.310964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.311149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.311177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.311195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.311445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.311691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.311714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.311730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.315341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 [2024-05-15 15:52:57.324486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.324892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.325054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.325082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.325100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.325351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.325597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.325621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.325637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.329254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.396 [2024-05-15 15:52:57.333698] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:44.396 [2024-05-15 15:52:57.333766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.396 [2024-05-15 15:52:57.338272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.338673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.338846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.338871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.338887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.339131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.339372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.339394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.339415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.342574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 [2024-05-15 15:52:57.351665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.352126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.352277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.352303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.352318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.352563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.352764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.352783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.352795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.355834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.396 [2024-05-15 15:52:57.364919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.365316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.365467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.365492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.365508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.365762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.365963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.365981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.365994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.369016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.396 [2024-05-15 15:52:57.378505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.378917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.379093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.379119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.379134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.379270] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:44.396 [2024-05-15 15:52:57.379361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.379605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.379629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.379642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.383108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.396 [2024-05-15 15:52:57.392370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.392793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.392949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.392975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.392990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.393242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.393443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.393463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.393475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.396970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 [2024-05-15 15:52:57.406362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.406770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.406933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.406958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.406974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.407229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.407430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.407449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.407462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.410955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.396 [2024-05-15 15:52:57.415296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:44.396 [2024-05-15 15:52:57.420357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.420835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.420993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.421020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.421037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.421289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.421499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.421534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.421553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.425058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 [2024-05-15 15:52:57.434298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.434794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.434967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.434993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.435014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.435276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.435502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.435522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.435539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.438775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.396 [2024-05-15 15:52:57.448202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.448634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.448800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.448826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.448842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.449087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.449332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.449354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.449367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.452879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 [2024-05-15 15:52:57.462055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.462538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.462695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.462722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.462740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.462989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.463275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.463296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.463312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.466858] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.396 [2024-05-15 15:52:57.476071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.476767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.476943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.476970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.476990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.477270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.477475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.477495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.477511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.396 [2024-05-15 15:52:57.480979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.396 [2024-05-15 15:52:57.489979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.396 [2024-05-15 15:52:57.490456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.490579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.396 [2024-05-15 15:52:57.490606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.396 [2024-05-15 15:52:57.490622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.396 [2024-05-15 15:52:57.490854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.396 [2024-05-15 15:52:57.491104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.396 [2024-05-15 15:52:57.491126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.396 [2024-05-15 15:52:57.491141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.695 [2024-05-15 15:52:57.494833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.695 [2024-05-15 15:52:57.504082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.695 [2024-05-15 15:52:57.504485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.695 [2024-05-15 15:52:57.504672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.695 [2024-05-15 15:52:57.504699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.695 [2024-05-15 15:52:57.504716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.695 [2024-05-15 15:52:57.504949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.695 [2024-05-15 15:52:57.505184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.695 [2024-05-15 15:52:57.505211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.695 [2024-05-15 15:52:57.505254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.695 [2024-05-15 15:52:57.505360] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.695 [2024-05-15 15:52:57.505407] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.695 [2024-05-15 15:52:57.505433] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.695 [2024-05-15 15:52:57.505454] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.695 [2024-05-15 15:52:57.505473] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
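[editor's note] The app_setup_trace notices just above spell out how to pull the tracepoint data this run collects (Tracepoint Group Mask 0xFFFF). A sketch of both options named in the log, using the same shared-memory instance id (-i 0) the target was started with:

    # Live snapshot of the nvmf trace groups from the running target
    spdk_trace -s nvmf -i 0
    # Or copy the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0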
00:34:44.695 [2024-05-15 15:52:57.505586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.695 [2024-05-15 15:52:57.505659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.695 [2024-05-15 15:52:57.505650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:44.695 [2024-05-15 15:52:57.508675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.695 [2024-05-15 15:52:57.517682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.695 [2024-05-15 15:52:57.518224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.518424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.518451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.518473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.518712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.518932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.518954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.518971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.522131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.696 [2024-05-15 15:52:57.531186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.531908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.531935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.531956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.532198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.532426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.532448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.532466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.535663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.696 [2024-05-15 15:52:57.544829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.545390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.545538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.545565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.545586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.545838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.546058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.546080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.546097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.549275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.696 [2024-05-15 15:52:57.558464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.558935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.559071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.559098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.559118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.559367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.559605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.559627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.559645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.563024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.696 [2024-05-15 15:52:57.572079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.572575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.572723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.572750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.572772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.573014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.573244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.573265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.573283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.576481] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.696 [2024-05-15 15:52:57.585815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.586348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.586513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.586539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.586559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.586798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.587027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.587048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.587064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.590323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.696 [2024-05-15 15:52:57.599426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.599796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.599961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.599986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.600002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.600227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.600448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.600469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.600483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.603776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.696 [2024-05-15 15:52:57.613046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.613415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.613546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.613572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.613588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.613805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.614025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.614045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.614059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:44.696 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:44.696 15:52:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:44.696 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.696 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.696 [2024-05-15 15:52:57.617332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.696 [2024-05-15 15:52:57.626768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.627147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.627283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.627310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.696 [2024-05-15 15:52:57.627331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.696 [2024-05-15 15:52:57.627564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.696 [2024-05-15 15:52:57.627778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.696 [2024-05-15 15:52:57.627798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.696 [2024-05-15 15:52:57.627812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.696 [2024-05-15 15:52:57.631075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.696 [2024-05-15 15:52:57.640320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.696 [2024-05-15 15:52:57.640701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.696 [2024-05-15 15:52:57.640832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.640857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.697 [2024-05-15 15:52:57.640873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.697 [2024-05-15 15:52:57.641089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.697 [2024-05-15 15:52:57.641347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.697 [2024-05-15 15:52:57.641369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.697 [2024-05-15 15:52:57.641383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.697 [2024-05-15 15:52:57.644741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.697 [2024-05-15 15:52:57.648449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.697 [2024-05-15 15:52:57.654020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:44.697 [2024-05-15 15:52:57.654361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.697 [2024-05-15 15:52:57.654508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.654534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.697 [2024-05-15 15:52:57.654550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.697 [2024-05-15 15:52:57.654767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.697 [2024-05-15 15:52:57.654988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.697 [2024-05-15 15:52:57.655008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.697 [2024-05-15 15:52:57.655027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.697 [2024-05-15 15:52:57.658348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.697 [2024-05-15 15:52:57.667617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.697 [2024-05-15 15:52:57.667956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.668119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.668145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.697 [2024-05-15 15:52:57.668161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.697 [2024-05-15 15:52:57.668386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.697 [2024-05-15 15:52:57.668633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.697 [2024-05-15 15:52:57.668653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.697 [2024-05-15 15:52:57.668666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.697 [2024-05-15 15:52:57.671869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.697 [2024-05-15 15:52:57.681447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.697 [2024-05-15 15:52:57.681812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.681968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.681993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.697 [2024-05-15 15:52:57.682011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.697 [2024-05-15 15:52:57.682238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.697 [2024-05-15 15:52:57.682460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.697 [2024-05-15 15:52:57.682481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.697 [2024-05-15 15:52:57.682511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.697 [2024-05-15 15:52:57.685791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.697 [2024-05-15 15:52:57.695110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.697 [2024-05-15 15:52:57.695694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.695864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.695891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.697 [2024-05-15 15:52:57.695911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.697 [2024-05-15 15:52:57.696151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.697 [2024-05-15 15:52:57.696401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.697 [2024-05-15 15:52:57.696423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.697 [2024-05-15 15:52:57.696440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.697 Malloc0 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.697 [2024-05-15 15:52:57.699761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.697 [2024-05-15 15:52:57.708720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.697 [2024-05-15 15:52:57.709128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.709261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.697 [2024-05-15 15:52:57.709287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57540 with addr=10.0.0.2, port=4420 00:34:44.697 [2024-05-15 15:52:57.709303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57540 is same with the state(5) to be set 00:34:44.697 [2024-05-15 15:52:57.709521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d57540 (9): Bad file descriptor 00:34:44.697 [2024-05-15 15:52:57.709750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.697 [2024-05-15 15:52:57.709770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.697 [2024-05-15 15:52:57.709784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:44.697 [2024-05-15 15:52:57.713120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:44.697 [2024-05-15 15:52:57.716269] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:44.697 [2024-05-15 15:52:57.716548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.697 15:52:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1475581 00:34:44.697 [2024-05-15 15:52:57.722427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.697 [2024-05-15 15:52:57.755122] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
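[editor's note] Stripped of the interleaved controller-reset noise, the target-side configuration bdevperf.sh performs in the trace above is a short RPC sequence. A hedged sketch using SPDK's stock rpc.py client (the trace itself goes through the suite's rpc_cmd wrapper, which talks to /var/tmp/spdk.sock by default):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420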
00:34:54.668 00:34:54.668 Latency(us) 00:34:54.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.668 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:54.668 Verification LBA range: start 0x0 length 0x4000 00:34:54.668 Nvme1n1 : 15.01 6578.74 25.70 8368.06 0.00 8538.15 861.68 21068.61 00:34:54.668 =================================================================================================================== 00:34:54.668 Total : 6578.74 25.70 8368.06 0.00 8538.15 861.68 21068.61 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:54.668 rmmod nvme_tcp 00:34:54.668 rmmod nvme_fabrics 00:34:54.668 rmmod nvme_keyring 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1476249 ']' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1476249 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1476249 ']' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1476249 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1476249 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1476249' 00:34:54.668 killing process with pid 1476249 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1476249 00:34:54.668 [2024-05-15 15:53:07.121162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- 
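[editor's note] The teardown at the end of the bdevperf stage mirrors the setup in reverse. A rough sketch of what the nvmf_delete_subsystem / nvmftestfini messages above correspond to, assuming the target pid recorded at startup (the exact cleanup order inside nvmftestfini is an assumption):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    modprobe -v -r nvme-tcp                                           # unload host-side fabrics modules
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                                 # stop the nvmf_tgt application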
common/autotest_common.sh@970 -- # wait 1476249 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.668 15:53:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.568 15:53:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:56.568 00:34:56.568 real 0m22.786s 00:34:56.568 user 0m59.955s 00:34:56.568 sys 0m4.562s 00:34:56.568 15:53:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:56.568 15:53:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.568 ************************************ 00:34:56.568 END TEST nvmf_bdevperf 00:34:56.568 ************************************ 00:34:56.568 15:53:09 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:56.568 15:53:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:56.568 15:53:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:56.568 15:53:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.568 ************************************ 00:34:56.568 START TEST nvmf_target_disconnect 00:34:56.568 ************************************ 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:56.568 * Looking for test storage... 
00:34:56.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.568 15:53:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:56.569 15:53:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.098 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:59.099 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:59.099 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.099 15:53:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:59.099 Found net devices under 0000:09:00.0: cvl_0_0 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:59.099 Found net devices under 0000:09:00.1: cvl_0_1 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.099 15:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:59.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:34:59.099 00:34:59.099 --- 10.0.0.2 ping statistics --- 00:34:59.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.099 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:34:59.099 00:34:59.099 --- 10.0.0.1 ping statistics --- 00:34:59.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.099 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:59.099 ************************************ 00:34:59.099 START TEST nvmf_target_disconnect_tc1 00:34:59.099 ************************************ 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:59.099 
15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:59.099 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:59.100 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:59.100 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.358 [2024-05-15 15:53:12.208296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:59.358 [2024-05-15 15:53:12.208540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:59.358 [2024-05-15 15:53:12.208571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef9d20 with addr=10.0.0.2, port=4420 00:34:59.358 [2024-05-15 15:53:12.208611] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:59.358 [2024-05-15 15:53:12.208638] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:59.358 [2024-05-15 15:53:12.208654] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:59.358 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:59.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:59.358 Initializing NVMe Controllers 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:59.358 00:34:59.358 real 0m0.104s 00:34:59.358 user 0m0.044s 00:34:59.358 sys 0m0.059s 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 ************************************ 00:34:59.358 END TEST nvmf_target_disconnect_tc1 00:34:59.358 ************************************ 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 ************************************ 00:34:59.358 START TEST nvmf_target_disconnect_tc2 00:34:59.358 ************************************ 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1479684 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1479684 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1479684 ']' 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
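The nvmftestinit trace above builds the TCP test bed on a single host: one of the two ice-driven ports it found (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. A minimal standalone sketch of that plumbing, assuming the same cvl_0_* interface names and 10.0.0.0/24 addressing this run used, looks like:

    #!/usr/bin/env bash
    # Sketch of the namespace setup nvmftestinit/nvmf_tcp_init performs above.
    # Interface names and addressing mirror this run; adjust for other NICs.
    set -euo pipefail
    TARGET_IF=cvl_0_0        # will be served by nvmf_tgt inside the namespace
    INITIATOR_IF=cvl_0_1     # stays in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Let NVMe/TCP traffic reach the default port the tests listen on.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With that in place, the tc1 case above simply runs the reconnect example against 10.0.0.2:4420 before any target is listening, so the probe is expected to fail with the connect() errors shown in its trace.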
00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:59.358 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 [2024-05-15 15:53:12.324488] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:59.358 [2024-05-15 15:53:12.324585] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.358 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.358 [2024-05-15 15:53:12.370077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:59.359 [2024-05-15 15:53:12.401461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.617 [2024-05-15 15:53:12.490258] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.617 [2024-05-15 15:53:12.490332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.617 [2024-05-15 15:53:12.490347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.617 [2024-05-15 15:53:12.490359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.617 [2024-05-15 15:53:12.490370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.617 [2024-05-15 15:53:12.490465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:59.617 [2024-05-15 15:53:12.490516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:59.617 [2024-05-15 15:53:12.490538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:59.617 [2024-05-15 15:53:12.490541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 Malloc0 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.617 15:53:12 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 [2024-05-15 15:53:12.668258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 [2024-05-15 15:53:12.696274] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:59.617 [2024-05-15 15:53:12.696636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1479827 00:34:59.617 15:53:12 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:59.617 15:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:59.876 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.786 15:53:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1479684 00:35:01.786 15:53:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 
starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 [2024-05-15 15:53:14.726404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 [2024-05-15 15:53:14.726699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 
00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Write completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 Read completed with error (sct=0, sc=8) 00:35:01.786 starting I/O failed 00:35:01.786 [2024-05-15 15:53:14.727056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:01.786 [2024-05-15 15:53:14.727286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.786 [2024-05-15 15:53:14.727444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.786 [2024-05-15 15:53:14.727480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.786 qpair failed and we were unable to recover it. 00:35:01.786 [2024-05-15 15:53:14.727606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.786 [2024-05-15 15:53:14.727777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.786 [2024-05-15 15:53:14.727802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.786 qpair failed and we were unable to recover it. 
00:35:01.786 [2024-05-15 15:53:14.727907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.786 [2024-05-15 15:53:14.728046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.786 [2024-05-15 15:53:14.728071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.786 qpair failed and we were unable to recover it. 00:35:01.786 [2024-05-15 15:53:14.728174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.728308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.728333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.728575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.728841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.728893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.729073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.729243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.729283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.729427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.729666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.729717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.729925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.730098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.730125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.730269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.730394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.730420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 
00:35:01.787 [2024-05-15 15:53:14.730568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.730743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.730770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.730966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.731132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.731159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.731306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.731427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.731454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.731598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.731736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.731765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.731913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.732130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.732156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.732404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.732579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.732612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.732804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.733021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.733048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 
00:35:01.787 [2024-05-15 15:53:14.733232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.733388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.733432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.733585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.733739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.733767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.733910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.734057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.734084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.734204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.734357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.734384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.734523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.734652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.734698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.734887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.735020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.735046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.735201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.735321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.735347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 
00:35:01.787 [2024-05-15 15:53:14.735454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.735707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.735733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.736095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.736264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.736290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.736415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.736654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.736680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.736844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.737019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.737046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.737184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.737422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.737451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.737664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.737814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.737858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.738002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.738228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.738267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 
00:35:01.787 [2024-05-15 15:53:14.738405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.738636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.738661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.738829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.738954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.738981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.787 qpair failed and we were unable to recover it. 00:35:01.787 [2024-05-15 15:53:14.739100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.739244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.787 [2024-05-15 15:53:14.739271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.739435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.739701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.739757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.739903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.740197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.740522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 
00:35:01.788 [2024-05-15 15:53:14.740833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.740998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.741025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.741143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.741285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.741313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.741461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.741665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.741692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.741871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.742260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.742533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.742856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.742991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 
00:35:01.788 [2024-05-15 15:53:14.743139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.743268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.743317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.743436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.743576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.743603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.743731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.743950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.743976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.744179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.744345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.744390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.744531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.744662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.744688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.744888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.745226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 
00:35:01.788 [2024-05-15 15:53:14.745482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.745784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.745913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.746076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.746195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.746238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.746384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.746540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.746584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.746809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.746974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.747001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.747147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.747279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.747326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.747497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.747737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.747767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 
00:35:01.788 [2024-05-15 15:53:14.747969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.748132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.748158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.748269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.748405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.748449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.748651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.748823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.748866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.749034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.749173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.749200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.788 qpair failed and we were unable to recover it. 00:35:01.788 [2024-05-15 15:53:14.749356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.788 [2024-05-15 15:53:14.749501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.749528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.749678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.749893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.749920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.750082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.750257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.750285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 
00:35:01.789 [2024-05-15 15:53:14.750513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.750664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.750691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.750873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.751209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.751533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.751865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.751996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.752023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.752163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.752328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.752356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.752515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.752753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.752808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 
00:35:01.789 [2024-05-15 15:53:14.752960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.753095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.753121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.753251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.753461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.753510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.753698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.753856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.753899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.754038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.754182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.754208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.754365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.754499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.754529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.754720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.754837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.754863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.755061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.755239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.755266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 
00:35:01.789 [2024-05-15 15:53:14.755410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.755586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.755615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.755837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.755998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.756025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.756162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.756280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.756308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.756471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.756604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.756648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.756810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.756993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.757020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.757242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.757383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.757409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.757552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.757680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.757724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 
00:35:01.789 [2024-05-15 15:53:14.757901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.758039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.758067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.758203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.758380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.758411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.758583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.758858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.758913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.759093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.759234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.759261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.759429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.759568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.759594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.759736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.759853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.789 [2024-05-15 15:53:14.759881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.789 qpair failed and we were unable to recover it. 00:35:01.789 [2024-05-15 15:53:14.760021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.760160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.760187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 
00:35:01.790 [2024-05-15 15:53:14.760339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.760489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.760516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.760700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.760884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.760911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.761051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.761170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.761198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.761370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.761517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.761562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.761708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.761928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.761955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.762088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.762258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.762285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.762475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.762662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.762688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 
00:35:01.790 [2024-05-15 15:53:14.762831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.763054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.763081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.763223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.763380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.763424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.763558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.763787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.763848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.764011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.764154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.764180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.764319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.764499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.764545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.764730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.764908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.764957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.765124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.765292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.765320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 
00:35:01.790 [2024-05-15 15:53:14.765466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.765611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.765638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.765803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.765944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.765972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.790 [2024-05-15 15:53:14.766112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.766255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.790 [2024-05-15 15:53:14.766282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.790 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.766422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.766539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.766567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.766724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.766908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.766935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.767055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.767223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.767250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.767365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.767477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.767504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 
00:35:01.791 [2024-05-15 15:53:14.767639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.767753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.767779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.767918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.768050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.768076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.768185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.768338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.768366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.768592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.768781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.768809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.769027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.769146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.769174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.769348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.769529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.769574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.769748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.769867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.769894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 
00:35:01.791 [2024-05-15 15:53:14.770043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.770181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.770208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.770380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.770543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.770573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.770749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.770904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.770931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.771038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.771181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.771208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.771383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.771517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.771543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.771713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.771846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.771896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.772071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.772235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.772265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 
00:35:01.791 [2024-05-15 15:53:14.772407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.772566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.772608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.772787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.772924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.772950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.773098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.773212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.773264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.773441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.773570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.773596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.773728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.773891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.773917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.774057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.774205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.774251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.774415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.774557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.774583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 
00:35:01.791 [2024-05-15 15:53:14.774717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.774858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.791 [2024-05-15 15:53:14.774885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.791 qpair failed and we were unable to recover it. 00:35:01.791 [2024-05-15 15:53:14.775033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.775174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.775205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.775359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.775501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.775527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.775666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.775834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.775878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.776050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.776173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.776201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.776377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.776583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.776626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.776815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.776948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.776973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 
00:35:01.792 [2024-05-15 15:53:14.777113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.777258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.777286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.777429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.777547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.777573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.777691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.777833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.777859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.778079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.778223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.778250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.778394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.778551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.778599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.778761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.778918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.778945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.779088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.779232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.779259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 
00:35:01.792 [2024-05-15 15:53:14.779386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.779539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.779584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.779803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.779974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.780108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.780393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.780728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.780897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.781116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.781309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.781354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 00:35:01.792 [2024-05-15 15:53:14.781546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.781705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.781741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it. 
00:35:01.792 [2024-05-15 15:53:14.781905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.782039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.792 [2024-05-15 15:53:14.782070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.792 qpair failed and we were unable to recover it.
00:35:01.792-00:35:01.798 [... identical failure pattern repeated: two posix.c:1037:posix_sock_create "connect() failed, errno = 111" entries, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.", recurring continuously from 15:53:14.782189 through 15:53:14.833365 ...]
00:35:01.799 [2024-05-15 15:53:14.833529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.833702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.833745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.833934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.834190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.834479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.834790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.834958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.835099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.835244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.835271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.835412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.835567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.835616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 
00:35:01.799 [2024-05-15 15:53:14.835783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.835924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.835957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.836098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.836269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.836296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.836439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.836576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.836602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.836714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.836854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.836879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.837013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.837180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.837206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.837367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.837513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.837539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.837681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.837824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.837849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 
00:35:01.799 [2024-05-15 15:53:14.837959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.838123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.838149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.838321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.838492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.838536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.838723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.838966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.839143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.839489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.839824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.839993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.840130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.840279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.840308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 
00:35:01.799 [2024-05-15 15:53:14.840510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.840685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.840728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.840867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.841223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.841561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.841859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.841997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.799 qpair failed and we were unable to recover it. 00:35:01.799 [2024-05-15 15:53:14.842139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.799 [2024-05-15 15:53:14.842238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.842265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.842406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.842557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.842599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 
00:35:01.800 [2024-05-15 15:53:14.842767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.842948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.842975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.843112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.843252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.843278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.843417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.843551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.843577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.843736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.843936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.843963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.844103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.844213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.844246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.844411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.844587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.844633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.844755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.844932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.844958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 
00:35:01.800 [2024-05-15 15:53:14.845102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.845247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.845276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.845437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.845636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.845679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.845797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.845964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.845990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.846135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.846289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.846334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.846513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.846770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.846824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.846968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.847080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.847106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.847263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.847462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.847504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 
00:35:01.800 [2024-05-15 15:53:14.847675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.847844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.847870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.848045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.848209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.848242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.848404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.848576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.848602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.848710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.848850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.848877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.849014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.849154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.849180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.849381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.849578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.849614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.800 qpair failed and we were unable to recover it. 00:35:01.800 [2024-05-15 15:53:14.849826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.849958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.800 [2024-05-15 15:53:14.849984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 
00:35:01.801 [2024-05-15 15:53:14.850121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.850268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.850295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.850493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.850698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.850741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.850877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.851182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.851492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.851834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.851974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.852146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.852272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.852300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 
00:35:01.801 [2024-05-15 15:53:14.852458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.852653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.852719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.852872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.853019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.853045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.853164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.853289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.853317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.853503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.853671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.853699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.853867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.854028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.854054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.854189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.854396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.854440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.854601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.854830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.854876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 
00:35:01.801 [2024-05-15 15:53:14.855020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.855135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.855163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.855310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.855446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.855472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.855645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.855837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.855863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.856025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.856163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.856189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.856365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.856543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.856587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.856751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.856904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.856930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.801 qpair failed and we were unable to recover it. 00:35:01.801 [2024-05-15 15:53:14.857061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.801 [2024-05-15 15:53:14.857210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.857251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 
00:35:01.802 [2024-05-15 15:53:14.857438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.857620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.857662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.857799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.858000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.858042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.858182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.858356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.858386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.858589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.858741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.858767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.858936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.859046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.859072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.859183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.859329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.859375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.859568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.859775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.859819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 
00:35:01.802 [2024-05-15 15:53:14.859962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.860105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.860132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.860330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.860558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.860613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.860740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.860896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.860923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.861067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.861211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.861249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.861441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.861590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.861633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.861770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.861933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.861959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.862101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.862247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.862274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 
00:35:01.802 [2024-05-15 15:53:14.862381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.862525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.862552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.862696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.862841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.862868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.863011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.863133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.863159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.863296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.863479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.863526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.863696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.863889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.863915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.864057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.864173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.864200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.864345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.864533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.864562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 
00:35:01.802 [2024-05-15 15:53:14.864714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.864896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.864923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.865055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.865159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.865184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.865335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.865516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.865543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.865682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.865810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.865854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.802 [2024-05-15 15:53:14.865997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.866108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.802 [2024-05-15 15:53:14.866134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.802 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.866298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.866424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.866450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.866613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.866743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.866785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 
00:35:01.803 [2024-05-15 15:53:14.866957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.867076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.867101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.867239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.867436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.867480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.867647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.867776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.867804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.867946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.868086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.868112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.868268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.868461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.868507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.868674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.868813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.868840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.869008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 
00:35:01.803 [2024-05-15 15:53:14.869262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.869572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.869858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.869998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.870165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.870512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.870831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.870998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.871140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.871272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.871302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 
00:35:01.803 [2024-05-15 15:53:14.871446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.871605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.871631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.871797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.871937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.871963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.872100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.872241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.872268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.872423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.872594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.872636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.872770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.872908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.872934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.873100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.873222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.873254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.873417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.873604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.873634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 
00:35:01.803 [2024-05-15 15:53:14.873837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.873991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.874018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.874135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.874298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.874327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.874498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.874668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.874711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.803 qpair failed and we were unable to recover it. 00:35:01.803 [2024-05-15 15:53:14.874820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.803 [2024-05-15 15:53:14.874936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.874962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.875079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.875228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.875255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.875423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.875579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.875621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.875754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.875864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.875891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 
00:35:01.804 [2024-05-15 15:53:14.876035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.876151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.876179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.876354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.876500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.876528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.876680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.876785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.876810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.876925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.877061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.877088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.877255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.877371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.804 [2024-05-15 15:53:14.877398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:01.804 qpair failed and we were unable to recover it. 00:35:01.804 [2024-05-15 15:53:14.877545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.877687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.877713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.077 qpair failed and we were unable to recover it. 00:35:02.077 [2024-05-15 15:53:14.877854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.877961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.877987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.077 qpair failed and we were unable to recover it. 
00:35:02.077 [2024-05-15 15:53:14.878133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.878290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.878334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.077 qpair failed and we were unable to recover it. 00:35:02.077 [2024-05-15 15:53:14.878497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.077 [2024-05-15 15:53:14.878633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.878659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.878805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.878923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.878951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.879089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.879203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.879237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.879370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.879554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.879599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.879766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.879941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.879971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.880110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.880268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.880299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 
00:35:02.078 [2024-05-15 15:53:14.880514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.880762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.880830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.880972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.881111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.881138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.881278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.881424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.881450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.881609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.881771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.881802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.881965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.882104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.882131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.882275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.882436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.882479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.882637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.882751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.882778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 
00:35:02.078 [2024-05-15 15:53:14.882938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.883082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.883109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.883249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.883403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.883434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.883624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.883805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.883831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.883962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.884127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.884152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.884314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.884457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.884500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.884646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.884835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.884878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.885044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.885189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.885222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 
00:35:02.078 [2024-05-15 15:53:14.885371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.885512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.885538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.885683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.885824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.885852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.885994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.886133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.886160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.886297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.886492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.886519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.886696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.886858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.886891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.887057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.887173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.887201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 00:35:02.078 [2024-05-15 15:53:14.887349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.887509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.887564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.078 qpair failed and we were unable to recover it. 
00:35:02.078 [2024-05-15 15:53:14.887742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.078 [2024-05-15 15:53:14.887905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.887931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.888048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.888185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.888212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.888395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.888546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.888589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.888742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.888911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.888937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.889053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.889190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.889224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.889395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.889597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.889641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.889798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.889950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.889976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 
00:35:02.079 [2024-05-15 15:53:14.890109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.890252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.890283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.890445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.890604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.890691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.890799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.890960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.890986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.891097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.891220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.891247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.891420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.891586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.891612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.891775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.891930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.891955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.892091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.892201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.892235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 
00:35:02.079 [2024-05-15 15:53:14.892373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.892542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.892584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.892785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.892945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.892971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.893085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.893268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.893298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.893482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.893656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.893699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.893844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.893987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.894150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.894406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 
00:35:02.079 [2024-05-15 15:53:14.894771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.894982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.895128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.895324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.895368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.895544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.895681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.895707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.079 qpair failed and we were unable to recover it. 00:35:02.079 [2024-05-15 15:53:14.895845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.895986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.079 [2024-05-15 15:53:14.896012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.896186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.896335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.896362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.896473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.896610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.896637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.896781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.896892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.896919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 
00:35:02.080 [2024-05-15 15:53:14.897045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.897164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.897191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.897396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.897541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.897570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.897789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.897954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.897980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.898094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.898241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.898268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.898388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.898517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.898543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.898681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.898820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.898846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.899011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.899152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.899178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 
00:35:02.080 [2024-05-15 15:53:14.899326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.899441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.899468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.899606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.899739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.899765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.899933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.900072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.900099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.900244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.900403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.900449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.900614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.900877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.900906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.901062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.901206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.901244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.901399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.901604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.901648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 
00:35:02.080 [2024-05-15 15:53:14.901805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.902192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.902531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.902867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.902991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.903019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.903180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.903338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.080 [2024-05-15 15:53:14.903382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.080 qpair failed and we were unable to recover it. 00:35:02.080 [2024-05-15 15:53:14.903545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.903744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.903793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.903961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.904133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.904160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 
00:35:02.081 [2024-05-15 15:53:14.904331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.904458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.904485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.904641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.904822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.904849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.904988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.905127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.905153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.905316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.905490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.905534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.905693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.905847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.905872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.905979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.906097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.906124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.906266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.906432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.906481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 
00:35:02.081 [2024-05-15 15:53:14.906680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.906820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.906846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.907012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.907150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.907177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.907380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.907554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.907600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.907757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.907915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.907942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.908107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.908265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.908295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.908439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.908601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.908630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 00:35:02.081 [2024-05-15 15:53:14.908797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.908929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.081 [2024-05-15 15:53:14.908955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.081 qpair failed and we were unable to recover it. 
00:35:02.081 [2024-05-15 15:53:14.909062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.081 [2024-05-15 15:53:14.909198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.081 [2024-05-15 15:53:14.909234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420
00:35:02.081 qpair failed and we were unable to recover it.
00:35:02.081 [2024-05-15 15:53:14.909377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.081 [2024-05-15 15:53:14.909551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.081 [2024-05-15 15:53:14.909595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420
00:35:02.081 qpair failed and we were unable to recover it.
00:35:02.081 [... identical retry loop continues: posix.c:1037:posix_sock_create reports "connect() failed, errno = 111" and nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420" on every attempt from 15:53:14.909 through 15:53:14.960, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:35:02.091 [2024-05-15 15:53:14.960973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.961119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.961145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 00:35:02.091 [2024-05-15 15:53:14.961290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.961444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.961497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 00:35:02.091 [2024-05-15 15:53:14.961689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.961843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.961870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 00:35:02.091 [2024-05-15 15:53:14.962017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.962136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.962167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 00:35:02.091 [2024-05-15 15:53:14.962311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.962487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.962536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 00:35:02.091 [2024-05-15 15:53:14.962694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.962860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.962887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 00:35:02.091 [2024-05-15 15:53:14.963007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.963145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.963178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.091 qpair failed and we were unable to recover it. 
00:35:02.091 [2024-05-15 15:53:14.963357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.963541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.091 [2024-05-15 15:53:14.963571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.963752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.963881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.963907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.964070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.964204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.964265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.964453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.964606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.964635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.964895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.965077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.965104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.965227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.965412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.965457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.965623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.965897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.965951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 
00:35:02.092 [2024-05-15 15:53:14.966099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.966241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.966269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.966433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.966642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.966685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.092 qpair failed and we were unable to recover it. 00:35:02.092 [2024-05-15 15:53:14.966857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.966964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.092 [2024-05-15 15:53:14.966990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.967132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.967292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.967337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.967450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.967600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.967627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.967766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.967899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.967926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.968037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.968173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.968201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 
00:35:02.093 [2024-05-15 15:53:14.968383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.968592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.968644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.968812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.968968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.968995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.969159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.969317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.969367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.969553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.969701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.969751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.969924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.970064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.970091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.970204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.970422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.970467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.970762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.970938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.970965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 
00:35:02.093 [2024-05-15 15:53:14.971112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.971282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.971309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.971446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.971555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.971581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.971719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.971861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.971888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.093 qpair failed and we were unable to recover it. 00:35:02.093 [2024-05-15 15:53:14.972058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.093 [2024-05-15 15:53:14.972200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.972233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 00:35:02.094 [2024-05-15 15:53:14.972384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.972569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.972596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 00:35:02.094 [2024-05-15 15:53:14.972703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.972849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.972875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 00:35:02.094 [2024-05-15 15:53:14.973049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.973188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.973223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 
00:35:02.094 [2024-05-15 15:53:14.973405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.973595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.973637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 00:35:02.094 [2024-05-15 15:53:14.973788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.973991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.974034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 00:35:02.094 [2024-05-15 15:53:14.974211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.974375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.094 [2024-05-15 15:53:14.974418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.094 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.974587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.974770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.974818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.975000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.975158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.975185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.975343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.975484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.975514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.975696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.975870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.975914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 
00:35:02.095 [2024-05-15 15:53:14.976079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.976197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.976230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.976428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.976666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.976720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.976879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.977036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.977062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.977205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.977394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.977439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.977572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.977773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.977815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.977982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.978164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.978190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.978341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.978523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.978552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 
00:35:02.095 [2024-05-15 15:53:14.978725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.978930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.978975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.979088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.979227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.979255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.979408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.979563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.979608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.979741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.979908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.979936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.980110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.980231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.980259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.980423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.980563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.980590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 00:35:02.095 [2024-05-15 15:53:14.980755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.980912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.095 [2024-05-15 15:53:14.980938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.095 qpair failed and we were unable to recover it. 
00:35:02.095 [2024-05-15 15:53:14.981060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.981229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.981261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.981453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.981601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.981645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.981805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.981963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.981991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.982134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.982331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.982376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.982538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.982811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.982863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.983007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.983143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.983170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.983321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.983514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.983558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 
00:35:02.096 [2024-05-15 15:53:14.983846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.984048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.984075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.984225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.984415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.984460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.984616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.984884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.984943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.985065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.985181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.985207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.985422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.985607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.985650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.096 qpair failed and we were unable to recover it. 00:35:02.096 [2024-05-15 15:53:14.985779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.985913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.096 [2024-05-15 15:53:14.985940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.986081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.986194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.986243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 
00:35:02.097 [2024-05-15 15:53:14.986395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.986570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.986613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.986766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.986900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.986927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.987044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.987187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.987228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.987367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.987512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.987539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.987685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.987822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.987848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.987990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.988154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.988180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.988344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.988549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.988594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 
00:35:02.097 [2024-05-15 15:53:14.988785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.988945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.988974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.989129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.989288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.989334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.989500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.989753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.989812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.989954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.990072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.990100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.990245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.990412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.990455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.990589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.990748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.990786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.097 qpair failed and we were unable to recover it. 00:35:02.097 [2024-05-15 15:53:14.990931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.097 [2024-05-15 15:53:14.991046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.991077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 
00:35:02.098 [2024-05-15 15:53:14.991261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.991402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.991432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.991593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.991728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.991754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.991886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.992048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.992074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.992192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.992344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.992371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.992538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.992691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.992733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.992879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.993268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 
00:35:02.098 [2024-05-15 15:53:14.993559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.993863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.993974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.994149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.994406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.994723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.994882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.995050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.995222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.995249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.098 [2024-05-15 15:53:14.995409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.995605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.995672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 
00:35:02.098 [2024-05-15 15:53:14.995835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.995996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.098 [2024-05-15 15:53:14.996023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.098 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.996186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.996328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.996384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.996580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.996725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.996769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.996915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.997059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.997086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.997202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.997338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.997370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.997535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.997679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.997725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.997892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.998034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.998062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 
00:35:02.099 [2024-05-15 15:53:14.998201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.998404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.998449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.998633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.998811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.998855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.999023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.999169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.999195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.999337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.999509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.999552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:14.999706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.999884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:14.999929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.000071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.000234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.000262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.000422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.000574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.000618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 
00:35:02.099 [2024-05-15 15:53:15.000779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.000910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.000937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.001103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.001289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.001320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.001499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.001641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.001670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.001835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.002003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.002030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.002173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.002351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.002378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.002517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.002640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.002683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.002870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.003046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.003072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 
00:35:02.099 [2024-05-15 15:53:15.003210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.003406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.003435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.003637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.003813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.003856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.003998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.004173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.004200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.004386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.004576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.004619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.004917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.005095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.005123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.005302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.005513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.005557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 00:35:02.099 [2024-05-15 15:53:15.005713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.005852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.099 [2024-05-15 15:53:15.005896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.099 qpair failed and we were unable to recover it. 
00:35:02.100 [2024-05-15 15:53:15.006043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.006180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.006207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.006367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.006483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.006510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.006628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.006776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.006803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.006971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.007112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.007138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.007263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.007426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.007471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.007711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.007865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.007892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.008031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.008169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.008196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 
00:35:02.100 [2024-05-15 15:53:15.008376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.008518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.008545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.008672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.008830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.008856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.009003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.009148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.009175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.009324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.009466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.009492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.009654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.009808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.009853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.009980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.010100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.010126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.010239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.010434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.010478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 
00:35:02.100 [2024-05-15 15:53:15.010609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.010787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.010830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.010955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.011071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.011097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.011212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.011376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.011406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.100 qpair failed and we were unable to recover it. 00:35:02.100 [2024-05-15 15:53:15.011586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.100 [2024-05-15 15:53:15.011852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.011903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.012027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.012170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.012197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.012375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.012529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.012573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.012768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.012902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.012928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 
00:35:02.101 [2024-05-15 15:53:15.013067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.013233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.013263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.013426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.013608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.013652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.013815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.013977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.014123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.014480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.014774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.014964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.015091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.015232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.015261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 
00:35:02.101 [2024-05-15 15:53:15.015419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.015608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.015697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.015842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.015984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.016130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.016463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.016824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.016972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.017138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.017267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.017298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.017507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.017787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.017831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 
00:35:02.101 [2024-05-15 15:53:15.017970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.018114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.018141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.018252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.018438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.018482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.018644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.018807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.018834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.018974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.019125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.019152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.019294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.019446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.019489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.019633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.019790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.019817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.019977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.020113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.020139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 
00:35:02.101 [2024-05-15 15:53:15.020304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.020486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.020530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.020642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.020804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.101 [2024-05-15 15:53:15.020847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.101 qpair failed and we were unable to recover it. 00:35:02.101 [2024-05-15 15:53:15.020983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.021093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.021120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.021264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.021413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.021440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.021560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.021698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.021725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.021866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.022032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.022058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.022171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.022387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.022415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 
00:35:02.102 [2024-05-15 15:53:15.022612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.022748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.022786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.022957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.023070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.023096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.023249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.023414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.023469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.023611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.023761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.023789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.023929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.024095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.024122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.024263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.024433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.024478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.024649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.024786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.024814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 
00:35:02.102 [2024-05-15 15:53:15.024957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.025122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.025149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.025336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.025516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.025559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.025722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.025903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.025930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.026042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.026184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.026211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.026357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.026535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.026585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.026838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.026979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.027007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.027153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.027326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.027373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 
00:35:02.102 [2024-05-15 15:53:15.027547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.027749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.027793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.027959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.028100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.028127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.028233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.028390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.028434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.028594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.028798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.028842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.028961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.029105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.029132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.029292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.029478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.029521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.029690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.029859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.029886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 
00:35:02.102 [2024-05-15 15:53:15.030027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.030133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.030159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.030323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.030503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.102 [2024-05-15 15:53:15.030547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.102 qpair failed and we were unable to recover it. 00:35:02.102 [2024-05-15 15:53:15.030706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.030889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.030916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.031055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.031196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.031230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.031397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.031606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.031650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.031814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.031997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.032023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.032187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.032333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.032382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 
00:35:02.103 [2024-05-15 15:53:15.032516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.032661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.032710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.032868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.033030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.033056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.033228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.033424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.033469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.033618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.033778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.033821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.033980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.034129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.034155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.034351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.034480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.034508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.034664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.034837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.034881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 
00:35:02.103 [2024-05-15 15:53:15.035017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.035160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.035187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.035363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.035542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.035586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.035740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.035877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.035904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.036022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.036141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.036168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.036325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.036477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.036507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.036677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.036855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.036881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.037013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.037151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.037178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 
00:35:02.103 [2024-05-15 15:53:15.037359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.037643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.037698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.037858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.038028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.038055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.038197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.038367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.038413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.038586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.038741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.038784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.038948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.039119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.039146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.039308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.039484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.039530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.039716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.039937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.039964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 
00:35:02.103 [2024-05-15 15:53:15.040082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.040227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.040254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.103 qpair failed and we were unable to recover it. 00:35:02.103 [2024-05-15 15:53:15.040413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.040582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.103 [2024-05-15 15:53:15.040626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.104 qpair failed and we were unable to recover it. 00:35:02.104 [2024-05-15 15:53:15.040789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.040917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.040943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.104 qpair failed and we were unable to recover it. 00:35:02.104 [2024-05-15 15:53:15.041079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.041191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.041223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.104 qpair failed and we were unable to recover it. 00:35:02.104 [2024-05-15 15:53:15.041368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.041539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.041565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.104 qpair failed and we were unable to recover it. 00:35:02.104 [2024-05-15 15:53:15.041718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.041877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.041904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.104 qpair failed and we were unable to recover it. 00:35:02.104 [2024-05-15 15:53:15.042014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.042125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.104 [2024-05-15 15:53:15.042151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.104 qpair failed and we were unable to recover it. 
00:35:02.109 [2024-05-15 15:53:15.090997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.091167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.091193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.091365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.091589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.091643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.091809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.091977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.092009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.092137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.092301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.092347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.092478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.092753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.092809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.092944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.093109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.093136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.093292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.093472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.093500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 
00:35:02.109 [2024-05-15 15:53:15.093701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.093840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.093868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.094011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.094123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.094149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.094288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.094486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.094531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.094694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.094838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.094865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.095000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.095139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.095166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.095330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.095513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.095555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.095684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.095840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.095873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 
00:35:02.109 [2024-05-15 15:53:15.095992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.096123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.096150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.096293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.096463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.096510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.096725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.096879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.096916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.109 qpair failed and we were unable to recover it. 00:35:02.109 [2024-05-15 15:53:15.097058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.097164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.109 [2024-05-15 15:53:15.097190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.097374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.097524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.097569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.097752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.097917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.097944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.098092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.098248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.098276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 
00:35:02.110 [2024-05-15 15:53:15.098409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.098665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.098724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.098892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.099025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.099052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.099191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.099349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.099399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.099535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.099712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.099757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.099918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.100050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.100076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.100214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.100362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.100393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.100573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.100724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.100766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 
00:35:02.110 [2024-05-15 15:53:15.100877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.101009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.101036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.101155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.101329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.101375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.101570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.101728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.101771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.101882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.102020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.102046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.102180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.102340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.102385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.102585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.102785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.102835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.102978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.103110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.103136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 
00:35:02.110 [2024-05-15 15:53:15.103294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.103455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.103504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.103670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.103805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.103831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.103999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.104135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.104161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.104321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.104505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.104548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.104707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.104883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.104912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.105064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.105202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.105242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.105414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.105594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.105636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 
00:35:02.110 [2024-05-15 15:53:15.105765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.105894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.105921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.106085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.106243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.110 [2024-05-15 15:53:15.106275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.110 qpair failed and we were unable to recover it. 00:35:02.110 [2024-05-15 15:53:15.106415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.106655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.106714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.106901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.107062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.107089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.107240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.107365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.107409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.107599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.107870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.107925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.108072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.108252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.108297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 
00:35:02.111 [2024-05-15 15:53:15.108499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.108709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.108765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.108920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.109060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.109087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.109232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.109419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.109449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.109597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.109778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.109805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.109924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.110042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.110068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.110236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.110418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.110461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.110650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.110779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.110806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 
00:35:02.111 [2024-05-15 15:53:15.110971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.111137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.111163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.111331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.111504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.111547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.111705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.111885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.111912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.112052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.112192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.112225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.112366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.112572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.112625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.112761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.112924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.112951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.113059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.113196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.113232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 
00:35:02.111 [2024-05-15 15:53:15.113394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.113594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.113638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.113804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.113951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.113977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.114092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.114239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.114267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.114429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.114606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.114649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.114815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.114974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.115001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.115132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.115326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.115370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 00:35:02.111 [2024-05-15 15:53:15.115519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.115694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.111 [2024-05-15 15:53:15.115761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.111 qpair failed and we were unable to recover it. 
00:35:02.112 [2024-05-15 15:53:15.115872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.116006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.116032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.116157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.116281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.116326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.116513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.116755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.116810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.116947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.117114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.117141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.117316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.117500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.117543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.117729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.117933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.117983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.118146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.118319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.118364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 
00:35:02.112 [2024-05-15 15:53:15.118526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.118699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.118747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.118903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.119033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.119059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.119227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.119361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.119404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.119559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.119762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.119806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.119994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.120133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.120165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.120358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.120555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.120621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.120811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.120990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.121020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 
00:35:02.112 [2024-05-15 15:53:15.121158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.121327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.121373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.121537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.121746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.121789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.121994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.122104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.122131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.112 qpair failed and we were unable to recover it. 00:35:02.112 [2024-05-15 15:53:15.122256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.122444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.112 [2024-05-15 15:53:15.122488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.122659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.122819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.122845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.123010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.123132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.123168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.123337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.123514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.123556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 
00:35:02.113 [2024-05-15 15:53:15.123685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.123851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.123878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.124004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.124172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.124199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.124348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.124526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.124571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.124740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.124875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.124902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.125066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.125225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.125280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.125457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.125704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.125732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.125875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.126020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.126047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 
00:35:02.113 [2024-05-15 15:53:15.126157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.126338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.126385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.126540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.126743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.126786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.126952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.127083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.127109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.127274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.127432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.127483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.127643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.127819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.127848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.128006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.128173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.128200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.128358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.128544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.128589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 
00:35:02.113 [2024-05-15 15:53:15.128728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.128871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.128897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.129043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.129155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.129179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.129376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.129593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.129634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.113 qpair failed and we were unable to recover it. 00:35:02.113 [2024-05-15 15:53:15.129797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.129974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.113 [2024-05-15 15:53:15.129998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.114 qpair failed and we were unable to recover it. 00:35:02.114 [2024-05-15 15:53:15.130140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.114 [2024-05-15 15:53:15.130272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.114 [2024-05-15 15:53:15.130300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.114 qpair failed and we were unable to recover it. 00:35:02.114 [2024-05-15 15:53:15.130452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.114 [2024-05-15 15:53:15.130646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.114 [2024-05-15 15:53:15.130671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.114 qpair failed and we were unable to recover it. 00:35:02.114 [2024-05-15 15:53:15.130861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.114 [2024-05-15 15:53:15.131010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.114 [2024-05-15 15:53:15.131034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.114 qpair failed and we were unable to recover it. 
00:35:02.395 [2024-05-15 15:53:15.182286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.182467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.182512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.182801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.182959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.182986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.183149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.183313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.183357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.183554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.183771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.183832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.183950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.184112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.184139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.184294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.184477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.184506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.184664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.184824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.184852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 
00:35:02.395 [2024-05-15 15:53:15.184987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.185125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.185155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.185311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.185441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.185469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.185656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.185806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.185850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-05-15 15:53:15.185964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.186129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.395 [2024-05-15 15:53:15.186156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.186340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.186521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.186565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.186730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.186864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.186890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.187030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.187174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.187201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 
00:35:02.396 [2024-05-15 15:53:15.187370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.187563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.187590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.187787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.187971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.187997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.188164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.188353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.188397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.188600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.188826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.188876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.189030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.189196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.189234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.189376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.189562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.189605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.189761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.189916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.189942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 
00:35:02.396 [2024-05-15 15:53:15.190109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.190247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.190275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.190469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.190647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.190689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.190826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.190961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.190987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.191152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.191314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.191362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.191577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.191846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.191901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.192049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.192164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.192192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.192362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.192573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.192617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 
00:35:02.396 [2024-05-15 15:53:15.192751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.192907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.192935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-05-15 15:53:15.193076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.193261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.396 [2024-05-15 15:53:15.193291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.193472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.193651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.193694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.193815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.193967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.193995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.194131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.194267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.194294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.194438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.194604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.194631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.194772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.194915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.194944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 
00:35:02.397 [2024-05-15 15:53:15.195090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.195234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.195262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.195403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.195581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.195625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.195746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.195863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.195890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.196053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.196191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.196223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.196389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.196539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.196582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.196689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.196821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.196847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.196991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.197158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.197184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 
00:35:02.397 [2024-05-15 15:53:15.197328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.197491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.197536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.197699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.197971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.198024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.198166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.198356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.198386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.198573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.198866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.198921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.199062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.199246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.199290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.199452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.199630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.199674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 00:35:02.397 [2024-05-15 15:53:15.199872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.200014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.200040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.397 qpair failed and we were unable to recover it. 
00:35:02.397 [2024-05-15 15:53:15.200166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.397 [2024-05-15 15:53:15.200329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.200374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.200544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.200717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.200761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.200901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.201150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.201501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.201828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.201993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.202019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.202158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.202318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.202363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 
00:35:02.398 [2024-05-15 15:53:15.202498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.202700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.202744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.202884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.202993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.203019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.203151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.203343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.203388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.203521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.203721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.203749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.203893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.204197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.204522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 
00:35:02.398 [2024-05-15 15:53:15.204834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.204982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.205123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.205391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.205725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.205877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.206025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.206167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.206195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.398 [2024-05-15 15:53:15.206346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.206523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.398 [2024-05-15 15:53:15.206553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.398 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.206756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.206938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.206964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 
00:35:02.399 [2024-05-15 15:53:15.207066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.207183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.207209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.207357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.207530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.207576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.207736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.207930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.207957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.208068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.208201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.208252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.208422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.208596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.208640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.208770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.208923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.208949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.209054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.209224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.209261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 
00:35:02.399 [2024-05-15 15:53:15.209385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.209513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.209557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.209737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.209959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.210008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.210145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.210313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.210360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.210508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.210681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.210724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.210839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.210999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.211025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.211164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.211327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.211371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.399 qpair failed and we were unable to recover it. 00:35:02.399 [2024-05-15 15:53:15.211502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.211686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.399 [2024-05-15 15:53:15.211731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 
00:35:02.400 [2024-05-15 15:53:15.211897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.212037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.212064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.212307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.212469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.212513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.212678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.212832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.212863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.212983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.213116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.213143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.213295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.213467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.213514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.213753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.213948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.213974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.214115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.214273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.214303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 
00:35:02.400 [2024-05-15 15:53:15.214487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.214642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.214685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.214841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.215200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.215513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.215825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.215988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.216130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.216415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 
00:35:02.400 [2024-05-15 15:53:15.216679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.216832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.216996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.217137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.217164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.217283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.217438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.217484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.217644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.217911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.217962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.218127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.218237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.218264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.218401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.218576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.218621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 00:35:02.400 [2024-05-15 15:53:15.218808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.218993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.400 [2024-05-15 15:53:15.219019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.400 qpair failed and we were unable to recover it. 
00:35:02.407 [2024-05-15 15:53:15.270030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.270180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.270206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.270368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.270517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.270560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.270722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.270855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.270883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.271048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.271187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.271213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.271389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.271571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.271614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.271780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.271929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.271977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.272152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.272290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.272335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 
00:35:02.407 [2024-05-15 15:53:15.272471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.272655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.272698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.272862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.273018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.273045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.273157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.273313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.273358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.273520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.273668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.273698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.273883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.274015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.274042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.274158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.274336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.274380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.274567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.274775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.274820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 
00:35:02.407 [2024-05-15 15:53:15.274964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.275105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.275132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.275258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.275390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.275433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.275603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.275743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.275769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.407 qpair failed and we were unable to recover it. 00:35:02.407 [2024-05-15 15:53:15.275903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.276046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.407 [2024-05-15 15:53:15.276072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.276220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.276353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.276398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.276543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.276723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.276767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.276915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.277079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.277105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 
00:35:02.408 [2024-05-15 15:53:15.277293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.277450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.277484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.277623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.277764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.277792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.277934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.278087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.278113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.278237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.278388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.278442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.278682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.278855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.278883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.278993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.279105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.279132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.279319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.279468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.279497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 
00:35:02.408 [2024-05-15 15:53:15.279675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.279831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.279858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.279976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.280090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.280115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.280297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.280533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.280591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.280750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.280911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.280938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.281104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.281221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.281254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.281392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.281574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.281646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.281758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.281897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.281925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 
00:35:02.408 [2024-05-15 15:53:15.282075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.282243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.282270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.408 [2024-05-15 15:53:15.282463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.282646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.408 [2024-05-15 15:53:15.282690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.408 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.282856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.282996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.283023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.283192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.283365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.283414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.283577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.283723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.283767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.283888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.284167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 
00:35:02.409 [2024-05-15 15:53:15.284507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.284846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.284999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.285118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.285230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.285261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.285394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.285574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.285620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.285812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.285991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.286018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.286155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.286292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.286320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.286485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.286662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.286705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 
00:35:02.409 [2024-05-15 15:53:15.286844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.287035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.287061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.287198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.287394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.287438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.287563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.287729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.287772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.287959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.288139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.288165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.288338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.288544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.288587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.288720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.288896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.288941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 00:35:02.409 [2024-05-15 15:53:15.289084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.289249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.409 [2024-05-15 15:53:15.289278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.409 qpair failed and we were unable to recover it. 
00:35:02.410 [2024-05-15 15:53:15.289404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.289555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.289599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.289747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.289861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.289888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.290027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.290193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.290225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.290408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.290589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.290632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.290825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.290964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.290991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.291105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.291248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.291275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.291390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.291495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.291521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 
00:35:02.410 [2024-05-15 15:53:15.291670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.291804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.291831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.291989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.292138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.292165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.292346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.292518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.292610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.292791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.292959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.292986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.293097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.293237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.293265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.293422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.293567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.293612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.293736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.293876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.293903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 
00:35:02.410 [2024-05-15 15:53:15.294017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.294152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.294178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.294360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.294544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.294642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.294812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.294932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.294959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.295102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.295234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.295262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.295441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.295572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.295600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.410 qpair failed and we were unable to recover it. 00:35:02.410 [2024-05-15 15:53:15.295804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.410 [2024-05-15 15:53:15.295948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.295975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.296114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.296296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.296341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 
00:35:02.411 [2024-05-15 15:53:15.296508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.296713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.296757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.296898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.297042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.297068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.297187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.297339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.297385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.297541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.297716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.297760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.297896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.298056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.298082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.298226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.298378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.298421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.298585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.298788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.298831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 
00:35:02.411 [2024-05-15 15:53:15.298964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.299105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.299133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.299327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.299512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.299561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.299712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.299873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.299901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.300067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.300203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.300237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.300425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.300609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.300651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.300841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.300975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.301003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.411 qpair failed and we were unable to recover it. 00:35:02.411 [2024-05-15 15:53:15.301123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.301262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.411 [2024-05-15 15:53:15.301296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 
00:35:02.412 [2024-05-15 15:53:15.301498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.301647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.301734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.301899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.302041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.302068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.302207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.302353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.302398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.302586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.302770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.302816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.302981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.303147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.303174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.303331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.303508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.303599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.303770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.303959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.303986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 
00:35:02.412 [2024-05-15 15:53:15.304152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.304305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.304351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.304537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.304746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.304799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.304910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.305076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.305103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.305294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.305506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.305548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.305712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.305839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.305866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.306009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.306146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.306172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.306346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.306554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.306610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 
00:35:02.412 [2024-05-15 15:53:15.306857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.307020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.307059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.307173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.307373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.307421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.307583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.307790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.307833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.307976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.308146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.308187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.308374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.308559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.308603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.412 qpair failed and we were unable to recover it. 00:35:02.412 [2024-05-15 15:53:15.308740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.308898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.412 [2024-05-15 15:53:15.308926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.309040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.309156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.309183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 
00:35:02.413 [2024-05-15 15:53:15.309381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.309570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.309614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.309776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.309955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.309981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.310092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.310274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.310304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.310488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.310659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.310707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.310840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.310982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.311008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.311156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.311332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.311360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.311553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.311820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.311874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 
00:35:02.413 [2024-05-15 15:53:15.312022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.312145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.312171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.312332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.312493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.312536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.312680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.312853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.312896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.313062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.313211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.313250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.313454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.313587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.313631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.313745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.313882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.313909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.314075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.314226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.314254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 
00:35:02.413 [2024-05-15 15:53:15.314395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.314533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.314579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.314745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.314910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.314937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.315100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.315220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.315248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.315440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.315611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.315654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.315791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.315946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.315973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.316087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.316252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.316278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.316426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.316564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.316608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 
00:35:02.413 [2024-05-15 15:53:15.316751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.316895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.316922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.317063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.317209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.317248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.317419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.317595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.317639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.413 qpair failed and we were unable to recover it. 00:35:02.413 [2024-05-15 15:53:15.317786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.413 [2024-05-15 15:53:15.317923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.317954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.318095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.318239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.318266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.318437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.318603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.318692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.318859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.319021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.319048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 
00:35:02.414 [2024-05-15 15:53:15.319191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.319353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.319398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.319528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.319703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.319748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.319892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.320050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.320078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.320247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.320383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.320427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.320582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.320755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.320797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.320913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.321053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.321080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.321211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.321410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.321458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 
00:35:02.414 [2024-05-15 15:53:15.321642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.321794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.321820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.321961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.322073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.322100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.322209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.322368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.322411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.322601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.322814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.322880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.323045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.323165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.323191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.323330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.323538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.323584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.323744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.323897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.323923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 
00:35:02.414 [2024-05-15 15:53:15.324062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.324231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.324258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.324394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.324604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.324662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.324822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.325005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.325035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.325156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.325313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.325359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.325544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.325760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.325818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.325959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.326100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.326126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.326311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.326484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.326512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 
00:35:02.414 [2024-05-15 15:53:15.326736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.326925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.326951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.327057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.327171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.327198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.327324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.327464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.327492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.327604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.327745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.327772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.327911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.328050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.328078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.414 qpair failed and we were unable to recover it. 00:35:02.414 [2024-05-15 15:53:15.328266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.328473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.414 [2024-05-15 15:53:15.328522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.328676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.328834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.328861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 
00:35:02.415 [2024-05-15 15:53:15.329007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.329117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.329145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.329327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.329519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.329549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.329735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.329890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.329917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.330055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.330169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.330196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.330325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.330493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.330537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.330653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.330767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.330793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.330923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.331096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.331122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 
00:35:02.415 [2024-05-15 15:53:15.331252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.331411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.331456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.331616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.331773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.331800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.331957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.332070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.332096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.332287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.332493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.332535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.332675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.332845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.332872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.333047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.333202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.333241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.333371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.333570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.333599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 
00:35:02.415 [2024-05-15 15:53:15.333776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.333923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.333949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.334060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.334185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.334249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.334408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.334613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.334657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.334822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.334929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.334955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.335096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.335212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.335246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.335442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.335617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.335646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.335798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.335916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.335942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 
00:35:02.415 [2024-05-15 15:53:15.336052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.336192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.336227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.336369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.336487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.336513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.336658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.336801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.336827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.336938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.337265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.337549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.337860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.337973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 
00:35:02.415 [2024-05-15 15:53:15.338111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.338417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.415 qpair failed and we were unable to recover it. 00:35:02.415 [2024-05-15 15:53:15.338797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.415 [2024-05-15 15:53:15.338946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.339081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.339226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.339253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.339417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.339554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.339582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.339721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.339835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.339862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.339970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.340105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.340133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 
00:35:02.416 [2024-05-15 15:53:15.340296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.340483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.340527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.340691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.340835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.340862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.340996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.341131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.341157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.341319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.341526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.341570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.341735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.341918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.341944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.342084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.342226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.342253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.342391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.342556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.342609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 
00:35:02.416 [2024-05-15 15:53:15.342776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.342918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.342946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.343063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.343179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.343206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.343368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.343519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.343562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.343740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.343886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.343914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.344055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.344195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.344231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.344424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.344730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.344787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.344923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.345070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.345096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 
00:35:02.416 [2024-05-15 15:53:15.345264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.345444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.345489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.346574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.346846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.346900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.347039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.347183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.347212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.347389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.347608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.347670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.347840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.348038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.348066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.348229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.348397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.348441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.348608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.348764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.348791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 
00:35:02.416 [2024-05-15 15:53:15.348900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.349045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.349070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.349212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.349433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.349462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.349638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.349797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.349823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.416 qpair failed and we were unable to recover it. 00:35:02.416 [2024-05-15 15:53:15.349993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.350106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.416 [2024-05-15 15:53:15.350133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.350272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.350460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.350506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.350665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.350820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.350846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.350987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.351134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.351160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 
00:35:02.417 [2024-05-15 15:53:15.351324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.351480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.351523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.351649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.351781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.351807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.351973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.352140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.352167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.352322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.352452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.352483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.352668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.352825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.352851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.352965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.353109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.353136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.353318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.353476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.353520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 
00:35:02.417 [2024-05-15 15:53:15.353719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.353896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.353921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.354060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.354202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.354241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.354408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.354610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.354653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.354813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.354995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.355021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.355154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.355315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.355359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.355547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.355690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.355733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.355849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.355981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.356007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 
00:35:02.417 [2024-05-15 15:53:15.356172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.356306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.356351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.356510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.356695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.356738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.356873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.357026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.357052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.357185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.357326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.357371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.357563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.357704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.357730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.357892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.358028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.358054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.358228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.358347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.358374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 
00:35:02.417 [2024-05-15 15:53:15.358498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.358663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.358689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.358923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.359234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.359525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.359827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.359990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.360098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.360245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.360272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.360428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.360584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.360628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 
00:35:02.417 [2024-05-15 15:53:15.360771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.360913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.360940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.361060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.361201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.361234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.417 [2024-05-15 15:53:15.361367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.361580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.417 [2024-05-15 15:53:15.361633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.417 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.361797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.361957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.361985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.362100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.362239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.362278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.362417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.362593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.362635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.362775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.362916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.362941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 
00:35:02.418 [2024-05-15 15:53:15.363085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.363194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.363225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.363364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.363524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.363568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.363764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.363925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.363951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.364074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.364226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.364252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.364390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.364561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.364604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.364775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.364927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.364953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.365075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.365227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.365254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 
00:35:02.418 [2024-05-15 15:53:15.365448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.365727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.365784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.365928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.366087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.366114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.366280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.366470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.366515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.366701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.366884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.366910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.367023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.367169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.367196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.367385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.367572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.367616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.367771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.367924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.367950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 
00:35:02.418 [2024-05-15 15:53:15.368101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.368245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.368280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.368444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.368653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.368695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.368867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.368974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.369112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.369427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.369745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.369904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.370016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.370156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.370183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 
00:35:02.418 [2024-05-15 15:53:15.370334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.370512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.370581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.370756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.370892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.370918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.371062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.371226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.371253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.371380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.371535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.371565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.371768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.371896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.371923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.372085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.372226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.372252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.372424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.372684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.372743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 
00:35:02.418 [2024-05-15 15:53:15.372933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.373113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.373139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.373303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.373452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.418 [2024-05-15 15:53:15.373498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.418 qpair failed and we were unable to recover it. 00:35:02.418 [2024-05-15 15:53:15.373688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.373941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.374000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.374138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.374272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.374322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.374483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.374694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.374737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.374925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.375083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.375109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.375235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.375372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.375416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 
00:35:02.419 [2024-05-15 15:53:15.375570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.375732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.375758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.375876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.376192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.376513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.376856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.376989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.377126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.377254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.377282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.377444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.377621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.377670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 
00:35:02.419 [2024-05-15 15:53:15.377836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.377972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.377998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.378115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.378227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.378257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.378402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.378573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.378599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.378764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.378904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.378930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.379061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.379209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.379243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.379404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.379594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.379620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.379806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.379966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.379991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 
00:35:02.419 [2024-05-15 15:53:15.380108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.380240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.380266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.380416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.380596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.380639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.380841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.380984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.381014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.381158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.381319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.381363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.381534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.381733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.381776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.381916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.382057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.382082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.382224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.382388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.382417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 
00:35:02.419 [2024-05-15 15:53:15.382567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.382780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.382825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.382939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.383074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.383101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.383286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.383486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.383514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.383697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.383881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.383908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.384047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.384211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.384246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.384376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.384563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.384608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 00:35:02.419 [2024-05-15 15:53:15.384777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.384950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.384996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.419 qpair failed and we were unable to recover it. 
00:35:02.419 [2024-05-15 15:53:15.385162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.385324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.419 [2024-05-15 15:53:15.385369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.385530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.385715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.385759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.385909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.386061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.386087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.386228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.386387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.386430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.386585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.386846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.386898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.387028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.387194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.387225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.387344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.387502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.387545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 
00:35:02.420 [2024-05-15 15:53:15.387737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.387918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.387960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.388100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.388240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.388267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.388459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.388673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.388726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.388914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.389071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.389098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.389281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.389469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.389523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.389691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.389849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.389876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.390014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.390167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.390194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 
00:35:02.420 [2024-05-15 15:53:15.390408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.390678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.390730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.390891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.391181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.391496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.391824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.391989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.392016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.392182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.392325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.392369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.392526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.392800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.392843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 
00:35:02.420 [2024-05-15 15:53:15.392966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.393109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.393140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.393332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.393580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.393640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.393762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.393915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.393942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.394111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.394257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.394285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.394426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.394616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.394669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.394831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.394986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.395013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.395147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.395314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.395358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 
00:35:02.420 [2024-05-15 15:53:15.395524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.395706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.395735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.395893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.396030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.420 [2024-05-15 15:53:15.396057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.420 qpair failed and we were unable to recover it. 00:35:02.420 [2024-05-15 15:53:15.396226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.396387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.396415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.396597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.396812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.396856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.397027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.397179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.397206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.397406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.397556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.397623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.397799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.397930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.397957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 
00:35:02.421 [2024-05-15 15:53:15.398078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.398227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.398267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.398412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.398615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.398686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.398829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.398994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.399021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.399157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.399335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.399380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.399548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.399811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.399868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.400036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.400208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.400242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.400403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.400691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.400745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 
00:35:02.421 [2024-05-15 15:53:15.400937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.401094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.401120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.401231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.401467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.401520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.401684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.401872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.401916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.402081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.402225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.402266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.402397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.402669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.402719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.402902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.403071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.403098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.403240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.403383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.403408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 
00:35:02.421 [2024-05-15 15:53:15.403558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.403831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.403881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.404013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.404147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.404180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.404382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.404616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.404643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.404810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.405033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.405059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.405222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.405410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.405436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.405612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.405790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.405835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.405978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.406121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.406148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 
00:35:02.421 [2024-05-15 15:53:15.406308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.406481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.406525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.406659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.406808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.406845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.406992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.407153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.407179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.407364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.407662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.407716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.407953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.408131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.408158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.408293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.408448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.408501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.408687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.408889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.408916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 
00:35:02.421 [2024-05-15 15:53:15.409093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.409264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.409293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.409451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.409593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.409639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.409782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.409899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.409926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.421 qpair failed and we were unable to recover it. 00:35:02.421 [2024-05-15 15:53:15.410059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.421 [2024-05-15 15:53:15.410168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.410195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.410378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.410570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.410642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.410844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.411008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.411034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.411177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.411327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.411354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 
00:35:02.422 [2024-05-15 15:53:15.411489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.411692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.411736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.411874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.412056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.412092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.412203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.412367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.412411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.412605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.412787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.412830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.412947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.413083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.413110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.413263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.413431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.413460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.413681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.413857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.413901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 
00:35:02.422 [2024-05-15 15:53:15.414036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.414179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.414206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.414358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.414562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.414607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.414771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.415025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.415080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.415239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.415393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.415421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.415609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.415811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.415863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.415975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.416280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 
00:35:02.422 [2024-05-15 15:53:15.416568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.416850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.416986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.417132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.417357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.417404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.417568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.417761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.417788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.417934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.418049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.418076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.418224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.418397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.418442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.418676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.418839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.418866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 
00:35:02.422 [2024-05-15 15:53:15.419082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.419225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.419254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.419411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.419711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.419766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.419912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.420046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.420072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.420209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.420379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.420424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.420598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.420755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.420797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.420957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.421120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.421147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 00:35:02.422 [2024-05-15 15:53:15.421284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.421473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.422 [2024-05-15 15:53:15.421517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.422 qpair failed and we were unable to recover it. 
00:35:02.422 [2024-05-15 15:53:15.421648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.421832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.421859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.422003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.422158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.422185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.422368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.422497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.422525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.422718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.422899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.422928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.423103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.423310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.423360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.423499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.423726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.423769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.423941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.424093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.424119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 
00:35:02.423 [2024-05-15 15:53:15.424241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.424365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.424409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.424597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.424726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.424754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.424906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.425042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.425069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.425176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.425348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.425393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.425537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.425730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.425773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.425920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.426063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.426090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.426232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.426377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.426424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 
00:35:02.423 [2024-05-15 15:53:15.426580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.426796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.426823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.426964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.427101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.427133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.427310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.427467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.427506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.427678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.427819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.427846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.428013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.428235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.428272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.428456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.428638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.428681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.428878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.428994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.429021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 
00:35:02.423 [2024-05-15 15:53:15.429159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.429332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.429381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.429568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.429754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.429796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.429931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.430075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.430103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.430244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.430419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.430462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.430597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.430754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.430781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.430943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.431084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.431111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.431258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.431421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.431450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 
00:35:02.423 [2024-05-15 15:53:15.431659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.431817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.431843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.431983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.432092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.432118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.432234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.432380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.432423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.432555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.432735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.432765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.423 qpair failed and we were unable to recover it. 00:35:02.423 [2024-05-15 15:53:15.432882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.423 [2024-05-15 15:53:15.433020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.433046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.433211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.433399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.433428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.433606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.433807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.433851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 
00:35:02.424 [2024-05-15 15:53:15.434016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.434153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.434179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.434354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.434511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.434556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.434697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.434901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.434943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.435079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.435222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.435250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.435440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.435588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.435631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.435816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.435976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.436003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.436113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.436300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.436349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 
00:35:02.424 [2024-05-15 15:53:15.436515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.436812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.436867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.437031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.437192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.437224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.437368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.437484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.437510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.437619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.437768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.437813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.438034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.438175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.438202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.438404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.438583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.438629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.438819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.438949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.438977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 
00:35:02.424 [2024-05-15 15:53:15.439098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.439242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.439272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.439436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.439610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.439636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.439797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.439937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.439967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.440109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.440241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.440269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.440436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.440610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.440654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.440840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.440974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.441002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.441140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.441315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.441360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 
00:35:02.424 [2024-05-15 15:53:15.441534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.441734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.441778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.441940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.442054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.442081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.442213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.442386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.442432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.442607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.442767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.442812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.442950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.443132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.443158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.443316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.443486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.443528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.443745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.443931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.443957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 
00:35:02.424 [2024-05-15 15:53:15.444098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.444242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.444269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.444405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.444581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.444628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.444821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.444947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.444973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.445090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.445234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.445261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.445402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.445520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.445547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.445710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.445835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.445862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.445976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.446127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.446153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 
00:35:02.424 [2024-05-15 15:53:15.446321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.446511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.446555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.424 qpair failed and we were unable to recover it. 00:35:02.424 [2024-05-15 15:53:15.446740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.446923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.424 [2024-05-15 15:53:15.446949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.447094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.447236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.447263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.447398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.447570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.447613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.447814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.447991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.448165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.448441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 
00:35:02.425 [2024-05-15 15:53:15.448793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.448983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.449119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.449276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.449305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.449509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.449796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.449840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.449978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.450115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.450141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.450303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.450476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.450519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.450814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.451013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.451039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.451178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.451434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.451477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 
00:35:02.425 [2024-05-15 15:53:15.451637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.451817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.451861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.452001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.452144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.452171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.452341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.452529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.452574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.452737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.452895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.452921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.453069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.453250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.453279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.453411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.453567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.453611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.453747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.453867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.453893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 
00:35:02.425 [2024-05-15 15:53:15.454001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.454161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.454187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.454367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.454656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.454706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.454894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.455676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.455708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.455937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.456123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.456150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.456334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.456511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.456555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.456717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.456927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.456969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.457110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.457227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.457253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 
00:35:02.425 [2024-05-15 15:53:15.457381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.457626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.457668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.457848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.457988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.458129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.458491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.458855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.458999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.459155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.459273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.459300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.459451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.459597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.459625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 
00:35:02.425 [2024-05-15 15:53:15.459770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.459902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.459929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.425 qpair failed and we were unable to recover it. 00:35:02.425 [2024-05-15 15:53:15.460080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.425 [2024-05-15 15:53:15.460272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.460302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.460481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.460638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.460664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.460782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.460945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.460972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.461113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.461264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.461290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.461432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.461548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.461574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.461714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.461851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.461878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 
00:35:02.426 [2024-05-15 15:53:15.462028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.462141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.462168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.462294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.462407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.462433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.462577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.462718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.462746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.462892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.463026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.463052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.463226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.463361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.463405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.463561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.463740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.463784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.463944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.464096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.464122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 
00:35:02.426 [2024-05-15 15:53:15.464348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.464527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.464571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.464736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.464977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.465004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.465157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.465296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.465340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.465506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.465676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.465718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.465857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.466002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.466028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.466179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.466321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.466368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.466504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.466673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.466702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 
00:35:02.426 [2024-05-15 15:53:15.466896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.467039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.467065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.467240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.467443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.467492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.467635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.467772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.467797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.467959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.468099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.468125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.468291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.468458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.468484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.468624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.468763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.468790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.468958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.469070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.469096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 
00:35:02.426 [2024-05-15 15:53:15.469230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.469419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.469463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.469650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.469808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.469834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.469982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.470146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.470172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.470402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.470564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.470590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.470734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.470875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.470901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.471044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.471182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.471208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.471335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.471450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.471478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 
00:35:02.426 [2024-05-15 15:53:15.471623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.471733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.471759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.471977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.472140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.472166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.472338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.472462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.472488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.472631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.472761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.472787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.426 qpair failed and we were unable to recover it. 00:35:02.426 [2024-05-15 15:53:15.472953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.426 [2024-05-15 15:53:15.473096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.473124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.473264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.473383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.473409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.473551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.473689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.473715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 
00:35:02.427 [2024-05-15 15:53:15.473832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.473997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.474132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.474428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.474714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.474900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.475036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.475182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.475207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.475363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.475547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.475589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.475716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.475877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.475903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 
00:35:02.427 [2024-05-15 15:53:15.476042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.476196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.476229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.476419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.476562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.476589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.427 [2024-05-15 15:53:15.476723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.476860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.427 [2024-05-15 15:53:15.476888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.427 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.477027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.477170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.477196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.477351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.477517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.477547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.477731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.477894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.477922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.478038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.478174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.478201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 
00:35:02.692 [2024-05-15 15:53:15.478354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.478498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.478542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.478660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.478777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.478803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.478943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.479061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.479086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.479235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.479388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.479432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.479602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.479809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.479851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.479995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.480143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.480170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.480347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.480535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.480565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 
00:35:02.692 [2024-05-15 15:53:15.480742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.480897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.480923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.481070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.481185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.481211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.481419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.481581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.481625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.481821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.481954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.481980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.482120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.482275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.482302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.482451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.482594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.692 [2024-05-15 15:53:15.482621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.692 qpair failed and we were unable to recover it. 00:35:02.692 [2024-05-15 15:53:15.482788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.693 [2024-05-15 15:53:15.482922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.693 [2024-05-15 15:53:15.482948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.693 qpair failed and we were unable to recover it. 
00:35:02.693 [2024-05-15 15:53:15.483083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.693 [2024-05-15 15:53:15.483195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.693 [2024-05-15 15:53:15.483228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420
00:35:02.693 qpair failed and we were unable to recover it.
[... the same failure cycle -- two "connect() failed, errno = 111" entries from posix.c:1037:posix_sock_create, the nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420" entry, and "qpair failed and we were unable to recover it." -- repeats back-to-back, uninterrupted, from 15:53:15.483 through 15:53:15.538 ...]
00:35:02.698 [2024-05-15 15:53:15.538315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.698 [2024-05-15 15:53:15.538529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.698 [2024-05-15 15:53:15.538557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420
00:35:02.698 qpair failed and we were unable to recover it.
00:35:02.698 [2024-05-15 15:53:15.538741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.538873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.538900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.539041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.539207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.539239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.539389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.539565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.539610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.539752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.539901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.539928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.540066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.540209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.540250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.540390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.540575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.540619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.540756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.540913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.540940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 
00:35:02.698 [2024-05-15 15:53:15.541081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.541196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.541230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.541351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.541495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.541522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.541645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.541784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.541810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.541956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.542288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.542622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.542883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.542999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.543026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 
00:35:02.698 [2024-05-15 15:53:15.543167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.543337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.543386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.543577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.543718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.543761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.698 qpair failed and we were unable to recover it. 00:35:02.698 [2024-05-15 15:53:15.543890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.698 [2024-05-15 15:53:15.544052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.544079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.544245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.544403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.544448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.544632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.544802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.544829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.544957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.545085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.545109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.545244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.545399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.545441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 
00:35:02.699 [2024-05-15 15:53:15.545643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.545793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.545834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.545941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.546048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.546072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.546237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.546374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.546415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.546581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.546737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.546766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.546912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.547045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.547069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.547234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.547410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.547453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.547647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.547782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.547806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 
00:35:02.699 [2024-05-15 15:53:15.547948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.548087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.548111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.548251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.548410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.548451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.548642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.548794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.548817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.548927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.549071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.549096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.549223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.549360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.549402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.549562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.549734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.549775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.549914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.550052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.550084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 
00:35:02.699 [2024-05-15 15:53:15.550208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.550351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.550391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.550553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.550702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.550742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.550890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.551020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.551044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.551180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.551328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.551375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.551530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.551670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.551715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.551867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.552138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 
00:35:02.699 [2024-05-15 15:53:15.552435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.552695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.552899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.553050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.553169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.553192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.553360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.553502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.699 [2024-05-15 15:53:15.553528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.699 qpair failed and we were unable to recover it. 00:35:02.699 [2024-05-15 15:53:15.553683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.553813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.553839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.554024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.554339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 
00:35:02.700 [2024-05-15 15:53:15.554596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.554865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.554998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.555143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.555277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.555302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.555415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.555550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.555574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.555728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.555871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.555896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.556035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.556187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.556213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.556355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.556519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.556544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 
00:35:02.700 [2024-05-15 15:53:15.556691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.556835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.556860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.556978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.557261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.557537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.557844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.557976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.558144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.558397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 
00:35:02.700 [2024-05-15 15:53:15.558732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.558894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.559042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.559178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.559204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.559376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.559538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.559563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.559706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.559845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.559870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.559980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.560119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.560143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.560364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.560526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.560550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.560658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.560829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.560853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 
00:35:02.700 [2024-05-15 15:53:15.560995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.561104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.561130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.561386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.561410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.561560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.561730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.561754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.561865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.562006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.562032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.562150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.562320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.562345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.700 qpair failed and we were unable to recover it. 00:35:02.700 [2024-05-15 15:53:15.562467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.700 [2024-05-15 15:53:15.562631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.562655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.562793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.562899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.562923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 
00:35:02.701 [2024-05-15 15:53:15.563089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.563204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.563234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.563376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.563515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.563539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.563679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.563814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.563838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.563945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.564106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.564131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.564269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.564378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.564403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.564546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.564682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.564706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.564847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 
00:35:02.701 [2024-05-15 15:53:15.565226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.565541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.565829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.565986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.566126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.566274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.566299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.566417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.566555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.566579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.566747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.566863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.566888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.567031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.567166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.567190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 
00:35:02.701 [2024-05-15 15:53:15.567321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.567463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.567487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.567596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.567734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.567758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.567926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.568205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.568513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.568819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.568984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.569102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.569263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.569289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 
00:35:02.701 [2024-05-15 15:53:15.569457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.569622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.569646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.569779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.569894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.569920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.701 [2024-05-15 15:53:15.570083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.570226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.701 [2024-05-15 15:53:15.570251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.701 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.570390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.570500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.570525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.570663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.570802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.570826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.570979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.571143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.571174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.571296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.571424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.571448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 
00:35:02.702 [2024-05-15 15:53:15.571616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.571739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.571763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.571865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.572211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.572511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.572809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.572981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:02.702 qpair failed and we were unable to recover it. 00:35:02.702 [2024-05-15 15:53:15.573148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.573295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.702 [2024-05-15 15:53:15.573320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.286 qpair failed and we were unable to recover it. 00:35:03.286 [2024-05-15 15:53:16.076404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.076600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.076629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.286 qpair failed and we were unable to recover it. 
00:35:03.286 [2024-05-15 15:53:16.076780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.076903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.076929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.286 qpair failed and we were unable to recover it. 00:35:03.286 [2024-05-15 15:53:16.077104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.077235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.077265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.286 qpair failed and we were unable to recover it. 00:35:03.286 [2024-05-15 15:53:16.077388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.077515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.077540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.286 qpair failed and we were unable to recover it. 00:35:03.286 [2024-05-15 15:53:16.077700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.077823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.077848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.286 qpair failed and we were unable to recover it. 00:35:03.286 [2024-05-15 15:53:16.077990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.078159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.286 [2024-05-15 15:53:16.078183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.078317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.078428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.078452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.078582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.078725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.078751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 
00:35:03.287 [2024-05-15 15:53:16.078869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.079195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.079502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.079792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.079962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.080126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.080298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.080326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.080474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.080580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.080606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.080748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.080872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.080898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 
00:35:03.287 [2024-05-15 15:53:16.081033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.081170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.081196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.081355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.081467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.081493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.081635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.081800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.081827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.081967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.082085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.082111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.082254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.082373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.082399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.082521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.082655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.082682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.082849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.083016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.083042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 
00:35:03.287 [2024-05-15 15:53:16.083154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.083276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.083302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.083442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.083588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.083616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.083786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.084015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.084042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.084210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.084397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.084424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.084651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.084768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.084809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.084918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.085066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.085092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.085208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.085345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.085372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 
00:35:03.287 [2024-05-15 15:53:16.085592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.085738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.085764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.085890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.086041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.086068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.086233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.086384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.086410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.086566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.086712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.287 [2024-05-15 15:53:16.086739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.287 qpair failed and we were unable to recover it. 00:35:03.287 [2024-05-15 15:53:16.086902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.087209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.087492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 
00:35:03.288 [2024-05-15 15:53:16.087771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.087965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.088079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.088189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.088223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.088377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.088497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.088523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.088735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.088868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.088911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.089061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.089182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.089209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.089368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.089512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.089540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.089663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.089801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.089828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 
00:35:03.288 [2024-05-15 15:53:16.089967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.090099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.090126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.090275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.090499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.090527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.090689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.090832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.090860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.090997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.091139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.091166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.091310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.091444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.091470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.091641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.091805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.091832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.091960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.092101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.092128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 
00:35:03.288 [2024-05-15 15:53:16.092347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.092577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.092604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.092775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.092949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.092975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.093127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.093268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.093296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.093448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.093625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.093652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.093815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.093919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.093950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.094118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.094337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.094363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.094505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.094667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.094694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 
00:35:03.288 [2024-05-15 15:53:16.094861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.094988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.095014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.095156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.095308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.095353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.095494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.095641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.095668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.095888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.096027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.096053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.096223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.096354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.096381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.096600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.096765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.096792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.288 qpair failed and we were unable to recover it. 00:35:03.288 [2024-05-15 15:53:16.096911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.288 [2024-05-15 15:53:16.097071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 
00:35:03.289 [2024-05-15 15:53:16.097243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.097549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.097823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.097986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.098126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.098272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.098299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.098445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.098595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.098622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.098841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.098966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.098993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.099136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.099273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.099300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 
00:35:03.289 [2024-05-15 15:53:16.099467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.099614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.099641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.099780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.099920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.099947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.100114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.100234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.100267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.100388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.100543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.100574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.100690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.100855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.100882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.101029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.101171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.101199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.101365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.101484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.101519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 
00:35:03.289 [2024-05-15 15:53:16.101674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.101790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.101817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.101957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.102090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.102116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.102282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.102399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.102425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.102600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.102716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.102744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.102909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.103223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.103546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 
00:35:03.289 [2024-05-15 15:53:16.103860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.103996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.104159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.104470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.104740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.104915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.105031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.105198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.105230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.105373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.105521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.105548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 00:35:03.289 [2024-05-15 15:53:16.105703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.105846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.289 [2024-05-15 15:53:16.105874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.289 qpair failed and we were unable to recover it. 
00:35:03.290 [2024-05-15 15:53:16.106017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.106157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.106183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.106345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.106464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.106500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.106639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.106785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.106812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.106956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.107228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.107538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.107832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.107973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 
00:35:03.290 [2024-05-15 15:53:16.108140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.108280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.108309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.108436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.108550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.108577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.108712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.108814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.108841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.108986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.109107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.109134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.109292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.109428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.109454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.109621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.109788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.109815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.109941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 
00:35:03.290 [2024-05-15 15:53:16.110236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.110519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.110810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.110980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.111200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.111341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.111368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.111512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.111652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.111678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.111819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.111940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.111968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.112074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.112220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.112256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 
00:35:03.290 [2024-05-15 15:53:16.112478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.112644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.112671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.112835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.112998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.113172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.113460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.113792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.113931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.114072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.114191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.114223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 00:35:03.290 [2024-05-15 15:53:16.114327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.114543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.290 [2024-05-15 15:53:16.114569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.290 qpair failed and we were unable to recover it. 
00:35:03.296 [2024-05-15 15:53:16.160594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.160736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.160763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.160926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.161064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.161090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.161247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.161412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.161439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.161582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.161722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.161748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.161886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.162225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.162563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 
00:35:03.296 [2024-05-15 15:53:16.162816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.162984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.163154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.163299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.163326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.163473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.163596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.163623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.163777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.163944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.296 [2024-05-15 15:53:16.163971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.296 qpair failed and we were unable to recover it. 00:35:03.296 [2024-05-15 15:53:16.164109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.164227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.164259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.164392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.164558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.164584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.164740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.164890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.164916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 
00:35:03.297 [2024-05-15 15:53:16.165056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.165169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.165194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.165349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.165484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.165511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.165643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.165760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.165786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.165925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.166064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.166090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.166237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.166408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.166435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.166585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.166727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.166754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.166895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 
00:35:03.297 [2024-05-15 15:53:16.167204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.167525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.167798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.167943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.168083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.168229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.168260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.168373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.168509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.168536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.168697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.168832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.168859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.168985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.169105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.169132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 
00:35:03.297 [2024-05-15 15:53:16.169307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.169480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.169506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.169679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.169824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.169851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.169966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.170248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.170524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.170788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.170977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.171095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.171225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.171252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 
00:35:03.297 [2024-05-15 15:53:16.171392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.171543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.171570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.171728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.171894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.171920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.172038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.172191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.172225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.172400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.172536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.172563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.172688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.172812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.172840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.172948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.173087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.173114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.297 qpair failed and we were unable to recover it. 00:35:03.297 [2024-05-15 15:53:16.173259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.297 [2024-05-15 15:53:16.173377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.173404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 
00:35:03.298 [2024-05-15 15:53:16.173518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.173647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.173674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.173816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.173969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.173995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.174164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.174297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.174324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.174463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.174603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.174630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.174796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.174963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.174988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.175143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.175260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.175288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.175434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.175577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.175604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 
00:35:03.298 [2024-05-15 15:53:16.175726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.175871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.175897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.176019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.176159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.176185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.176341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.176480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.176506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.176623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.176744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.176771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.176913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.177225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.177535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 
00:35:03.298 [2024-05-15 15:53:16.177789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.177956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.178093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.178227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.178253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.178386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.178523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.178550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.178704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.178858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.178885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.178994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.179136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.179163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.179280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.179450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.179476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.179639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.179794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.179820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 
00:35:03.298 [2024-05-15 15:53:16.179971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.180093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.180120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.180283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.180438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.180464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.180629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.180770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.180796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.180931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.181100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.181126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.181242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.181396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.181422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.181598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.181769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.181796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 00:35:03.298 [2024-05-15 15:53:16.181938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.182087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.298 [2024-05-15 15:53:16.182114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.298 qpair failed and we were unable to recover it. 
00:35:03.299 [2024-05-15 15:53:16.182266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.182373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.182399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.182509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.182651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.182677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.182816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.182951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.182978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.183119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.183270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.183297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.183442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.183583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.183610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.183763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.183917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.183944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.184080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.184183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.184210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 
00:35:03.299 [2024-05-15 15:53:16.184342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.184490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.184516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.184655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.184766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.184794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.184935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.185084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.185111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.185276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.185398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.185425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.185594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.185714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.185741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.185875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.186038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.186065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.186231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.186408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.186434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 
00:35:03.299 [2024-05-15 15:53:16.186612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.186723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.186749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.186868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.187166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.187456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.187750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.187912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.188076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.188232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.188273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.188438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.188575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.188602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 
00:35:03.299 [2024-05-15 15:53:16.188712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.188816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.188843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.188968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.189107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.189135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.189272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.189437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.189463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.189607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.189722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.189750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.189894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.190035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.190062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.190232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.190386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.190412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 00:35:03.299 [2024-05-15 15:53:16.190537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.190679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.299 [2024-05-15 15:53:16.190706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.299 qpair failed and we were unable to recover it. 
00:35:03.300 [2024-05-15 15:53:16.190846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.190957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.190984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 00:35:03.300 [2024-05-15 15:53:16.191149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.191308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.191335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 00:35:03.300 [2024-05-15 15:53:16.191473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.191617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.191644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 00:35:03.300 [2024-05-15 15:53:16.191789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.191928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.191955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 00:35:03.300 [2024-05-15 15:53:16.192089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.192204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.192242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 00:35:03.300 [2024-05-15 15:53:16.192373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.192537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.192565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 00:35:03.300 [2024-05-15 15:53:16.192674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.192835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.300 [2024-05-15 15:53:16.192862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.300 qpair failed and we were unable to recover it. 
00:35:03.305 [2024-05-15 15:53:16.237686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.237799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.237827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.237984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.238238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.238552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.238877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.238992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.239158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.239466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 
00:35:03.305 [2024-05-15 15:53:16.239752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.239946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.240087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.240235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.240273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.240413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.240532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.240560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.240701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.240848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.240875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.241043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.241181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.241207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.241326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.241486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.305 [2024-05-15 15:53:16.241513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.305 qpair failed and we were unable to recover it. 00:35:03.305 [2024-05-15 15:53:16.241627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.241791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.241818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 
00:35:03.306 [2024-05-15 15:53:16.241942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.242220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.242502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.242844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.242980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.243174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.243478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.243778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.243965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 
00:35:03.306 [2024-05-15 15:53:16.244108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.244230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.244261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.244401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.244545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.244571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.244683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.244826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.244853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.244969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.245271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.245577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.245881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.245993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 
00:35:03.306 [2024-05-15 15:53:16.246142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.246454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.246764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.246957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.247098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.247243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.247270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.247382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.247539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.247566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.247707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.247848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.247874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.248039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.248177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.248203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 
00:35:03.306 [2024-05-15 15:53:16.248354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.248497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.248523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.248639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.248744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.248771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.248915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.249242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.249533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.249849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.249986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.250147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 
00:35:03.306 [2024-05-15 15:53:16.250434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.306 qpair failed and we were unable to recover it. 00:35:03.306 [2024-05-15 15:53:16.250762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.306 [2024-05-15 15:53:16.250927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.251041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.251212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.251243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.251407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.251512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.251539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.251676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.251815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.251841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.252007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.252149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.252177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.252352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.252498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.252526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 
00:35:03.307 [2024-05-15 15:53:16.252673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.252834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.252860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.253025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.253167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.253194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Write completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Write completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Write completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Write completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Write completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Write completed with error (sct=0, 
sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 Read completed with error (sct=0, sc=8) 00:35:03.307 starting I/O failed 00:35:03.307 [2024-05-15 15:53:16.253587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:03.307 [2024-05-15 15:53:16.253745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.253946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.253986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.254152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.254337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.254374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.254511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.254650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.254693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.254859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.255046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.255081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.255273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.255430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.255466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.255612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb31970 is same with the state(5) to be set 00:35:03.307 [2024-05-15 15:53:16.255792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.255914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.255941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 
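Editorial note: the repeated "connect() failed, errno = 111" entries above come from SPDK's POSIX socket layer while the NVMe/TCP initiator keeps retrying the target at 10.0.0.2 port 4420; on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections at that address/port at that moment, so each qpair connect attempt fails and is retried. The standalone C sketch below is only an illustration of how a refused TCP connect surfaces as errno 111 - it is not SPDK code, and the address/port defaults are simply the values taken from this log.

/* connect_refused_demo.c - minimal illustration (not SPDK code):
 * attempt a TCP connect and print the errno a refused connection produces.
 * Build: cc -o connect_refused_demo connect_refused_demo.c
 * Run:   ./connect_refused_demo 10.0.0.2 4420
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *addr = argc > 1 ? argv[1] : "10.0.0.2";
    int port = argc > 2 ? atoi(argv[2]) : 4420;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", addr);
        close(fd);
        return 1;
    }

    /* If the host is reachable but no listener is bound to addr:port,
     * connect() fails and errno is ECONNREFUSED (111 on Linux) - the
     * same value reported by posix_sock_create in the log above. */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connect() succeeded\n");
    }

    close(fd);
    return 0;
}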
00:35:03.307 [2024-05-15 15:53:16.256057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.256228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.256255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.256400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.256542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.256568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.256709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.256866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.256893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.257054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.257208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.257240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.257395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.257540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.257568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.257730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.257837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.257863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 00:35:03.307 [2024-05-15 15:53:16.258025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.258144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.307 [2024-05-15 15:53:16.258175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.307 qpair failed and we were unable to recover it. 
00:35:03.308 [2024-05-15 15:53:16.258296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.258455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.258481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.258656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.258798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.258824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.258987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.259272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.259509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.259806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.259974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.260086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.260242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.260269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 
00:35:03.308 [2024-05-15 15:53:16.260385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.260539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.260565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.260704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.260842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.260867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.260987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.261128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.261159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.261325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.261440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.261466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.261606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.261745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.261771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.261920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.262036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.262062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.262203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.262355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.262382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 
00:35:03.308 [2024-05-15 15:53:16.262551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.262691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.262717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.262880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.263040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.263066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.263209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.263369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.263395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.263528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.263699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.263724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.263839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.264154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.264450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 
00:35:03.308 [2024-05-15 15:53:16.264724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.264915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.265040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.265177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.265203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.265346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.265465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.265491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.265647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.265790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.265816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.265955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.266071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.266097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.266233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.266377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.266403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.266515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.266680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.266706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 
00:35:03.308 [2024-05-15 15:53:16.266870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.267211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.267504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.267788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.267930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.308 qpair failed and we were unable to recover it. 00:35:03.308 [2024-05-15 15:53:16.268071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.268212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.308 [2024-05-15 15:53:16.268247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.268393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.268525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.268551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.268691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.268831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.268857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 
00:35:03.309 [2024-05-15 15:53:16.268996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.269136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.269163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.269297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.269442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.269468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.269603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.269749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.269775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.269917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.270221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.270526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.270855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.270991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 
00:35:03.309 [2024-05-15 15:53:16.271178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.271466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.271711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.271848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.271959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.272277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.272556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.272838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.272973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 
00:35:03.309 [2024-05-15 15:53:16.273137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.273265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.273292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.273429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.273597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.273627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.273790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.273930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.273956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.274118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.274258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.274285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.274397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.274549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.274575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.274730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.274868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.274895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.275009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 
00:35:03.309 [2024-05-15 15:53:16.275281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.275565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.309 qpair failed and we were unable to recover it. 00:35:03.309 [2024-05-15 15:53:16.275826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.309 [2024-05-15 15:53:16.275968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.275994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.276137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.276256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.276283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.276421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.276539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.276565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.276711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.276851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.276877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.277021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.277160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.277186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 
00:35:03.310 [2024-05-15 15:53:16.277331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.277464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.277491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.277630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.277769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.277795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.277930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.278227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.278530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.278817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.278976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.279143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 
00:35:03.310 [2024-05-15 15:53:16.279446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.279794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.279950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.280092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.280233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.280259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.280375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.280476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.280502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.280643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.280785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.280812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.280976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.281257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 
00:35:03.310 [2024-05-15 15:53:16.281526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.281772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.281930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.282073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.282212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.282243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.282355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.282486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.282511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.282657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.282800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.282826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.282942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.283227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 
00:35:03.310 [2024-05-15 15:53:16.283538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.283841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.283979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.284146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.284482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.284763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.284898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.310 [2024-05-15 15:53:16.285035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.285167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.310 [2024-05-15 15:53:16.285193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.310 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.285325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.285468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.285495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 
00:35:03.311 [2024-05-15 15:53:16.285606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.285720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.285747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.285905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.286050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.286076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.286241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.286379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.286406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.286549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.286689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.286716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.286890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.287150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.287491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 
00:35:03.311 [2024-05-15 15:53:16.287803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.287941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.288082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.288192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.288223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.288342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.288452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.288478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.288583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.288743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.288772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.288914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.289203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.289538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 
00:35:03.311 [2024-05-15 15:53:16.289852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.289991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.290136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.290425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.290753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.290917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.291024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.291183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.291209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.291345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.291486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.291512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.291674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.291840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.291866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 
00:35:03.311 [2024-05-15 15:53:16.292018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.292158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.292186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.292333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.292474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.292500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.292641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.292755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.292781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.292922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.293165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.293413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.311 qpair failed and we were unable to recover it. 00:35:03.311 [2024-05-15 15:53:16.293700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.311 [2024-05-15 15:53:16.293839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.293865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 
00:35:03.312 [2024-05-15 15:53:16.294016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.294157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.294182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.294328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.294435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.294461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.294610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.294723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.294749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.294868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.295209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.295489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.295782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.295969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 
00:35:03.312 [2024-05-15 15:53:16.296108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.296224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.296251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.296390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.296503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.296530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.296661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.296770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.296796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.296970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.297271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.297523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.297837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.297987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 
00:35:03.312 [2024-05-15 15:53:16.298137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.298495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.298824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.298967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.299131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.299294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.299321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.299435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.299545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.299571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.299736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.299878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.299903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.300018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.300122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.300148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 
00:35:03.312 [2024-05-15 15:53:16.300281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.300446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.300472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.300641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.300778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.300804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.300957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.301299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.301573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.301840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.301979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.302004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 00:35:03.312 [2024-05-15 15:53:16.302145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.302257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.302284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.312 qpair failed and we were unable to recover it. 
00:35:03.312 [2024-05-15 15:53:16.302426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.312 [2024-05-15 15:53:16.302587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.302613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.302755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.302891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.302917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.303084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.303226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.303252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.303403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.303545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.303571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.303687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.303833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.303858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.303995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.304134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.304166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.304331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.304450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.304476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 
00:35:03.313 [2024-05-15 15:53:16.304637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.304779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.304805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.304928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.305199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.305505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.305830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.305982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.306120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.306252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.306279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.306416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.306577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.306602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 
00:35:03.313 [2024-05-15 15:53:16.306741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.306855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.306882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.307024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.307138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.307164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.307307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.307469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.307495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.307659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.307800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.307826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.307940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.308240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.308481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 
00:35:03.313 [2024-05-15 15:53:16.308761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.308928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.309046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.309186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.309212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.309360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.309470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.309496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.309604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.309705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.309731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.309873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.310011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.310037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.310177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.310344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.310371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.313 qpair failed and we were unable to recover it. 00:35:03.313 [2024-05-15 15:53:16.310509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.313 [2024-05-15 15:53:16.310646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.310672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 
00:35:03.314 [2024-05-15 15:53:16.310807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.310978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.311170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.311459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.311749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.311943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.312054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.312195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.312227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.312339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.312477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.312503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.312644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.312780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.312805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 
00:35:03.314 [2024-05-15 15:53:16.312914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.313064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.313089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.313251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.313397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.313423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.313587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.313724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.313750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.313893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.314219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.314546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.314850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.314999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 
00:35:03.314 [2024-05-15 15:53:16.315163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.315436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.315761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.315949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.316081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.316224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.316250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.316413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.316576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.316606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.316773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.316891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.316917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.317081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.317184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.317210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 
00:35:03.314 [2024-05-15 15:53:16.317354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.317472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.317498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.317634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.317772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.317798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.317935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.318230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.318515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.318844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.318982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 00:35:03.314 [2024-05-15 15:53:16.319103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.319270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.319297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.314 qpair failed and we were unable to recover it. 
00:35:03.314 [2024-05-15 15:53:16.319434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.319580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.314 [2024-05-15 15:53:16.319606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.319724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.319867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.319893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.320029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.320167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.320193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.320339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.320462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.320488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.320596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.320755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.320781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.320889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.321183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 
00:35:03.315 [2024-05-15 15:53:16.321485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.321784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.321976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.322109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.322249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.322277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.322415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.322577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.322603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.322776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.322910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.322936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.323084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.323197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.323228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.323404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.323531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.323558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 
00:35:03.315 [2024-05-15 15:53:16.323693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.323840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.323866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.324006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.324119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.324146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.324267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.324372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.324398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.324539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.324679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.324705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.324868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.325201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.325526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 
00:35:03.315 [2024-05-15 15:53:16.325819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.325991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.326131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.326438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.326773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.326933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.327073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.327208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.327239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.327353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.327495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.327523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.327689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.327802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.327829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 
00:35:03.315 [2024-05-15 15:53:16.327967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.328085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.328111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.315 [2024-05-15 15:53:16.328304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.328446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.315 [2024-05-15 15:53:16.328473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.315 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.328590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.328755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.328780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.328897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.329041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.329068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.329205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.329368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.329395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.329555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.329695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.329721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.329900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 
00:35:03.316 [2024-05-15 15:53:16.330154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.330463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.330739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.330899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.331023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.331187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.331213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.331333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.331475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.331501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.331634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.331773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.331799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.331916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 
00:35:03.316 [2024-05-15 15:53:16.332267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.332577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.332865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.332993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.333130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.333395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.333669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.333853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.333967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 
00:35:03.316 [2024-05-15 15:53:16.334252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.334587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.334882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.334998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.335165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.335490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.335828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.335992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.336132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.336284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.336310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 
00:35:03.316 [2024-05-15 15:53:16.336420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.336566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.336592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.336733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.336887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.336913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.337057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.337198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.337238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.337352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.337496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.337523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.337666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.337807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.316 [2024-05-15 15:53:16.337833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.316 qpair failed and we were unable to recover it. 00:35:03.316 [2024-05-15 15:53:16.337977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.338142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.338169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.338320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.338442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.338468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 
00:35:03.317 [2024-05-15 15:53:16.338632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.338773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.338799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.338962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.339240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.339490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.339767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.339937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.340074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.340247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.340274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.340423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.340543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.340569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 
00:35:03.317 [2024-05-15 15:53:16.340682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.340822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.340848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.341015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.341298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.341552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.341816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.341951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.342066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.342227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.342253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 00:35:03.317 [2024-05-15 15:53:16.342373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.342517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.317 [2024-05-15 15:53:16.342543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:03.317 qpair failed and we were unable to recover it. 
00:35:03.317 [2024-05-15 15:53:16.342659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.317 [2024-05-15 15:53:16.342777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.317 [2024-05-15 15:53:16.342804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:03.317 qpair failed and we were unable to recover it.
00:35:03.317 [2024-05-15 15:53:16.342924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.317 [2024-05-15 15:53:16.343080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.317 [2024-05-15 15:53:16.343107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:03.317 qpair failed and we were unable to recover it.
(The same three-line failure repeats on every subsequent retry against tqpair=0xb23e50: two posix.c:1037:posix_sock_create connect() errors with errno = 111, then an nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."; the log timestamps run from 15:53:16.343271 through 15:53:16.382095 while the console time advances from 00:35:03.317 to 00:35:03.600.)
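For context on the repeated errors above: errno = 111 is ECONNREFUSED on Linux, the value connect() sets when the peer actively refuses the connection, i.e. nothing is listening or accepting on the target address and port; here the target is 10.0.0.2 on port 4420, the standard NVMe over Fabrics TCP port. The snippet below is a minimal, self-contained illustration of that condition, not SPDK's posix_sock_create: a plain blocking TCP connect that prints the same errno when no listener is present.

/* Minimal illustration (not SPDK code): try a TCP connect to 10.0.0.2:4420
 * and report the errno.  When no listener is bound to that address/port,
 * connect() fails with ECONNREFUSED, which is errno 111 on Linux - the
 * value seen throughout the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected output with no listener:
         * "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

In an autotest run, a stream of these refusals usually just means the NVMe-oF target listener on 10.0.0.2:4420 was not (or not yet) accepting connections while the host side kept retrying.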
00:35:03.600 [2024-05-15 15:53:16.382227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.600 [2024-05-15 15:53:16.382338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.600 [2024-05-15 15:53:16.382364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:03.600 qpair failed and we were unable to recover it.
(Three more identical retries against tqpair=0xb23e50 fail the same way between 15:53:16.382509 and 15:53:16.383194.)
00:35:03.600 [2024-05-15 15:53:16.383337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.600 [2024-05-15 15:53:16.383490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.600 [2024-05-15 15:53:16.383519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.600 qpair failed and we were unable to recover it.
(From this point the same failure sequence repeats for tqpair=0x7f1448000b90 on every retry: two posix.c:1037:posix_sock_create connect() errors with errno = 111, an nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; the log timestamps run from 15:53:16.383638 through 15:53:16.388 with console time 00:35:03.600-00:35:03.601.)
00:35:03.601 [2024-05-15 15:53:16.388355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.388495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.388521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.388634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.388769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.388795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.388933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.389245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.389555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.389840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.389976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.390144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 
00:35:03.601 [2024-05-15 15:53:16.390451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.390737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.390895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.391036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.391320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.391588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.391856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.391989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 00:35:03.601 [2024-05-15 15:53:16.392107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.392273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.601 [2024-05-15 15:53:16.392300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.601 qpair failed and we were unable to recover it. 
00:35:03.602 [2024-05-15 15:53:16.392464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.392585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.392612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.392721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.392863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.392891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.393063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.393207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.393242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.393356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.393495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.393521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.393661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.393797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.393824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.393949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.394262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 
00:35:03.602 [2024-05-15 15:53:16.394591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.394856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.394987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.395151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.395453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.395705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.395875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.396053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.396194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.396226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.396343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.396460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.396486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 
00:35:03.602 [2024-05-15 15:53:16.396620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.396762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.396789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.396932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.397197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.397498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.397795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.397936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.398048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.398225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.398253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.398396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.398536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.398567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 
00:35:03.602 [2024-05-15 15:53:16.398708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.398871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.398897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.399059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.399199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.399233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.399374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.399516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.399542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.399707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.399875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.399901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.400019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.400129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.400155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.400291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.400418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.400445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.400580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.400724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.400750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 
00:35:03.602 [2024-05-15 15:53:16.400891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.401006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.401033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.602 qpair failed and we were unable to recover it. 00:35:03.602 [2024-05-15 15:53:16.401169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.602 [2024-05-15 15:53:16.401286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.401313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.401425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.401565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.401596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.401734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.401850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.401877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.401992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.402134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.402160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.402279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.402393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.402419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.402558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.402726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.402752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 
00:35:03.603 [2024-05-15 15:53:16.402898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.403205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.403523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.403831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.403994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.404166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.404449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.404730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.404901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 
00:35:03.603 [2024-05-15 15:53:16.405020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.405186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.405212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.405383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.405522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.405548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.405717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.405880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.405906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.406056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.406193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.406223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.406365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.406476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.406502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.406644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.406747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.406772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.406892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.407060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.407086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 
00:35:03.603 [2024-05-15 15:53:16.407244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.407377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.407405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.603 qpair failed and we were unable to recover it. 00:35:03.603 [2024-05-15 15:53:16.407548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.603 [2024-05-15 15:53:16.407689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.407717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.407844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.407957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.407984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.408098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.408211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.408245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.408355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.408485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.408511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.408651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.408787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.408813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.408924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 
00:35:03.604 [2024-05-15 15:53:16.409241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.409527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.409840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.409978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.410172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.410439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.410752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.410887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.411000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.411117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.411144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 
00:35:03.604 [2024-05-15 15:53:16.411279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.411419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.411446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.411583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.411718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.411744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.411909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.412211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.412489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.412765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.412957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.413074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.413228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.413255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 
00:35:03.604 [2024-05-15 15:53:16.413372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.413512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.413538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.413682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.413823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.413850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.413991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.414263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.414540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.414819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.414982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.415117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 
00:35:03.604 [2024-05-15 15:53:16.415436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.415749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.415926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.416066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.416185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.604 [2024-05-15 15:53:16.416211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.604 qpair failed and we were unable to recover it. 00:35:03.604 [2024-05-15 15:53:16.416330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.416445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.416471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.416645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.416776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.416802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.416951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.417091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.417118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.417285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.417426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.417452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 
00:35:03.605 [2024-05-15 15:53:16.417596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.417734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.417760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.417900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.418171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.418496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.418774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.418918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.419083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.419197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.419230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.419374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.419518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.419544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 
00:35:03.605 [2024-05-15 15:53:16.419687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.419800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.419826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.419956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.420235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.420537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.420787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.420951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.421092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.421235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.421262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.421405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.421572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.421598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 
00:35:03.605 [2024-05-15 15:53:16.421749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.421889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.421915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.422068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.422351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.422634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.422879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.422990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.423154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.423466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 
00:35:03.605 [2024-05-15 15:53:16.423770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.423963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.424075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.424194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.424228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.424369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.424480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.424506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.424648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.424813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.424839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.605 [2024-05-15 15:53:16.424957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.425121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.605 [2024-05-15 15:53:16.425148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.605 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.425294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.425414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.425441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.425586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.425746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.425772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 
00:35:03.606 [2024-05-15 15:53:16.425908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.426074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.426101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.426241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.426376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.426403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.426541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.426680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.426707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.426870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.427174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.427482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.427775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.427914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 
00:35:03.606 [2024-05-15 15:53:16.428065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.428230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.428256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.428374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.428512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.428538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.428676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.428785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.428811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.428931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.429068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.429094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.429260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.429402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.429428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.429567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.429709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.429736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.429871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 
00:35:03.606 [2024-05-15 15:53:16.430139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.430413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.430715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.430880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.431044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.431187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.431214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.431347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.431492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.431519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.431653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.431804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.431831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.431967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.432131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.432158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 
00:35:03.606 [2024-05-15 15:53:16.432292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.432435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.432462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.432626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.432762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.432788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.432930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.433233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.433554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.433831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.433975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.434001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.606 qpair failed and we were unable to recover it. 00:35:03.606 [2024-05-15 15:53:16.434167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.606 [2024-05-15 15:53:16.434319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.434347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 
00:35:03.607 [2024-05-15 15:53:16.434463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.434632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.434658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.434791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.434937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.434963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.435106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.435243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.435270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.435407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.435551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.435578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.435721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.435833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.435861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.435999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.436342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 
00:35:03.607 [2024-05-15 15:53:16.436583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.436856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.436992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.437160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.437446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.437758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.437904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.438041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.438214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.438245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.438365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.438497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.438523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 
00:35:03.607 [2024-05-15 15:53:16.438633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.438773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.438799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.438919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.439238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.439502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.439837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.439982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.440090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.440194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.440226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.440394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.440511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.440539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 
00:35:03.607 [2024-05-15 15:53:16.440652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.440782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.440812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.440952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.441295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.441552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.441850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.441988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.442135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.607 [2024-05-15 15:53:16.442414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 
00:35:03.607 [2024-05-15 15:53:16.442661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.607 [2024-05-15 15:53:16.442820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.607 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.442961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.443263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.443546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.443825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.443970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.444133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.444276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.444304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.444420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.444554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.444581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 
00:35:03.608 [2024-05-15 15:53:16.444729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.444844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.444871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.445030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.445140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.445167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.445311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.445452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.445480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.445596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.445756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.445783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.445902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.446236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.446551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 
00:35:03.608 [2024-05-15 15:53:16.446851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.446990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.447193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.447517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.447824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.447993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.448143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.448281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.448307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.448443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.448557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.448584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.448718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.448884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.448910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 
00:35:03.608 [2024-05-15 15:53:16.449020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.449134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.449161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.449287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.449404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.449431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.449548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.449685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.449716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.449878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.450012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.450039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.608 qpair failed and we were unable to recover it. 00:35:03.608 [2024-05-15 15:53:16.450181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.450328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.608 [2024-05-15 15:53:16.450355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.450510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.450675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.450701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.450862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 
00:35:03.609 [2024-05-15 15:53:16.451160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.451442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.451763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.451929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.452039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.452188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.452214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.452363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.452502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.452528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.452695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.452836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.452862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.452988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 
00:35:03.609 [2024-05-15 15:53:16.453238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.453543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.453819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.453960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.454105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.454225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.454252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.454365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.454502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.454528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.454664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.454797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.454823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.454961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 
00:35:03.609 [2024-05-15 15:53:16.455268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.455556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.455839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.455976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.456124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.456431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.456740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.456902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.457017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.457184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.457211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 
00:35:03.609 [2024-05-15 15:53:16.457356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.457500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.457526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.457664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.457816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.457842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.457982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.458274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.458548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.458840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.458987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.609 qpair failed and we were unable to recover it. 00:35:03.609 [2024-05-15 15:53:16.459129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.609 [2024-05-15 15:53:16.459262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.459289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 
00:35:03.610 [2024-05-15 15:53:16.459410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.459544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.459570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.459710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.459851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.459877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.459995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.460111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.460137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.460303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.460412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.460439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.460609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.460768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.460793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.460905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.461194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 
00:35:03.610 [2024-05-15 15:53:16.461485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.461830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.461993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.462135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.462270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.462297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.462404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.462569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.462595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.462719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.462869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.462895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.463015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.463162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.463189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.463368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.463542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.463568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 
00:35:03.610 [2024-05-15 15:53:16.463709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.463814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.463839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.463975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.464111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.464136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.464263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.464404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.464430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.464578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.464723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.464749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.464897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.465206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.465499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 
00:35:03.610 [2024-05-15 15:53:16.465807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.465995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.466116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.466260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.466286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.466418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.466536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.466563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.466673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.466793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.466818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.466933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.467769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.467801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.467965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.468087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.468114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.468642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.468796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.468822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 
00:35:03.610 [2024-05-15 15:53:16.468949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.469066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.610 [2024-05-15 15:53:16.469092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.610 qpair failed and we were unable to recover it. 00:35:03.610 [2024-05-15 15:53:16.469231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.469343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.469370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.469481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.469610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.469638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.469804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.469945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.469972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.470135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.470260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.470287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.470417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.470535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.470563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.470682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.470849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.470875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 
00:35:03.611 [2024-05-15 15:53:16.471014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.471152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.471178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.471335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.471460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.471486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.471627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.471793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.471819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.471990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.472145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.472173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.472303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.472436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.472474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.472605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.472773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.472800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.472918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 
00:35:03.611 [2024-05-15 15:53:16.473180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.473475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.473738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.473905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.474013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.474279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.474545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.474829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.474997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 
00:35:03.611 [2024-05-15 15:53:16.475120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.475230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.475258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.475400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.475532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.475559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.475701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.475804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.475830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.475999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.476320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.476584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.476868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.476973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 
00:35:03.611 [2024-05-15 15:53:16.477110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.477389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.611 qpair failed and we were unable to recover it. 00:35:03.611 [2024-05-15 15:53:16.477687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.611 [2024-05-15 15:53:16.477825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.477944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.478111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.478137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.478276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.478422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.478448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.478563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.478730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.478757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.478889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.479056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.479082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 
00:35:03.612 [2024-05-15 15:53:16.479244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.479398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.479425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.479569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.479705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.479732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.479895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.480153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.480446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.480760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.480925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.481066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.481203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.481241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 
00:35:03.612 [2024-05-15 15:53:16.481389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.481502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.481529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.481671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.481777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.481803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.481943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.482259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.482541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.482835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.482980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 00:35:03.612 [2024-05-15 15:53:16.483116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.483268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.483294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.612 qpair failed and we were unable to recover it. 
00:35:03.612 [2024-05-15 15:53:16.483408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.483563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.612 [2024-05-15 15:53:16.483590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.483727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.483881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.483907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.484025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.484154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.484180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.484325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.484472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.484509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.484650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.484795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.484821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.484958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.485245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 
00:35:03.613 [2024-05-15 15:53:16.485537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.485844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.485979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.486147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.486474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.486730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.486900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.487014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.487291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 
00:35:03.613 [2024-05-15 15:53:16.487592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.487852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.487987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.488013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.488176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.488330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.488357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.488479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.488601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.613 [2024-05-15 15:53:16.488627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.613 qpair failed and we were unable to recover it. 00:35:03.613 [2024-05-15 15:53:16.488737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.488857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.488884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.488992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.489154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.489181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.489335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.489477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.489504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 
00:35:03.614 [2024-05-15 15:53:16.489616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.489784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.489814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.489982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.490122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.490148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.490294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.490435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.490461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.490573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.490710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.490738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.490908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.491191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.491463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 
00:35:03.614 [2024-05-15 15:53:16.491772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.491910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.492052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.492159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.492185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.492323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.492460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.492495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.492639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.492750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.492780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.492927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.493243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.493567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 
00:35:03.614 [2024-05-15 15:53:16.493841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.493982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.494147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.494402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.494678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.494820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.494963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.495261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.495537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 
00:35:03.614 [2024-05-15 15:53:16.495828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.495997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.496132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.496277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.496304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.496440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.496558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.496584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.496727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.496862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.496888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.497014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.497182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.497209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.614 [2024-05-15 15:53:16.497360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.497475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.614 [2024-05-15 15:53:16.497501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.614 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.497626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.497737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.497764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 
00:35:03.615 [2024-05-15 15:53:16.497929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.498234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.498521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.498829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.498998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.499141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.499281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.499307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.499443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.499587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.499614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.499756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.499867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.499893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 
00:35:03.615 [2024-05-15 15:53:16.500033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.500195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.500226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.500337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.500448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.500486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.500598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.500739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.500765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.500894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.501031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.501057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.501198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.501341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.501367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.501517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.501684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.501710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.501881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 
00:35:03.615 [2024-05-15 15:53:16.502161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.502406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.502690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.502857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.502999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.503277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.503554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.503853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.503993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 
00:35:03.615 [2024-05-15 15:53:16.504129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.504433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.504751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.504890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.504993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.505137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.505164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.505318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.505459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.505497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.505631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.505755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.505781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 00:35:03.615 [2024-05-15 15:53:16.505926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.506078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.506104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.615 qpair failed and we were unable to recover it. 
00:35:03.615 [2024-05-15 15:53:16.506264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.615 [2024-05-15 15:53:16.506384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.506411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.506584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.506701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.506729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.506862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.507194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.507491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.507785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.507951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.508061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.508194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.508225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 
00:35:03.616 [2024-05-15 15:53:16.508343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.508454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.508486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.508781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.508945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.508972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.509133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.509279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.509305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.509420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.510236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.510276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.510402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.510549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.510575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.510680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.510796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.510822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.510936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 
00:35:03.616 [2024-05-15 15:53:16.511242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.511570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.511844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.511992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.512111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.512248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.512274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.512387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.512499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.512526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.512670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.512778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.512805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.512965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 
00:35:03.616 [2024-05-15 15:53:16.513235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.513525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.513821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.513961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.514074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.514206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.514238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.514385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.514526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.514552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.514718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.514833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.514859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.515003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.515113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.515140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 
00:35:03.616 [2024-05-15 15:53:16.515281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.515423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.515449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.515597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.515704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.616 [2024-05-15 15:53:16.515730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.616 qpair failed and we were unable to recover it. 00:35:03.616 [2024-05-15 15:53:16.515869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.515975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.516153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.516417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.516727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.516869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.516978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 
00:35:03.617 [2024-05-15 15:53:16.517245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.517554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.517855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.517983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.518115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.518381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.518683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.518825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.518941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.519076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.519103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 
00:35:03.617 [2024-05-15 15:53:16.519266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.519385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.519411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.519550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.519718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.519744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.519894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.520238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.520508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.520785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.520947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.521060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.521174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.521200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 
00:35:03.617 [2024-05-15 15:53:16.521326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.521448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.521474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.521590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.521708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.521734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.521873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.522204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.522486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.522788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.522952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.617 qpair failed and we were unable to recover it. 00:35:03.617 [2024-05-15 15:53:16.523072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.617 [2024-05-15 15:53:16.523197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.523247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 
00:35:03.618 [2024-05-15 15:53:16.523365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.523517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.523543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.523663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.523778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.523805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.523936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.524099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.524125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.524265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.524403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.524429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.524559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.524691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.524718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.524886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.525162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 
00:35:03.618 [2024-05-15 15:53:16.525428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.525702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.525868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.525972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.526273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.526553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.526838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.526980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.527145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 
00:35:03.618 [2024-05-15 15:53:16.527461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.527765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.527933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.528073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.528205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.528239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.528359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.528496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.528522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.528652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.528766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.528792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.528908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.529222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 
00:35:03.618 [2024-05-15 15:53:16.529504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.529804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.529991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.530127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.530256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.530283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.530425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.530568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.530594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.530707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.530815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.530841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.530982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.531118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.531144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 00:35:03.618 [2024-05-15 15:53:16.531261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.531373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.618 [2024-05-15 15:53:16.531399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.618 qpair failed and we were unable to recover it. 
00:35:03.618 [2024-05-15 15:53:16.531563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.618 [2024-05-15 15:53:16.531706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.618 [2024-05-15 15:53:16.531731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.618 qpair failed and we were unable to recover it.
00:35:03.618 [2024-05-15 15:53:16.531868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.619 [2024-05-15 15:53:16.532014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.619 [2024-05-15 15:53:16.532040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.619 qpair failed and we were unable to recover it.
[The same four-record failure pattern (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock sock connection error on tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 2024-05-15 15:53:16.532158 through 15:53:16.576660, Jenkins timestamps 00:35:03.618 to 00:35:03.624.]
00:35:03.624 [2024-05-15 15:53:16.576771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.576899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.576926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.577068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.577189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.577222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.577389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.577508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.577533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.577672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.577782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.577808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.577947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.578261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.578501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 
00:35:03.624 [2024-05-15 15:53:16.578765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.578898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.579007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.579280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.579526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.579836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.579976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.580146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.580385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 
00:35:03.624 [2024-05-15 15:53:16.580636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.580776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.580897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.581180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.581458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.581703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.581848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.581964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.582106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.582133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 00:35:03.624 [2024-05-15 15:53:16.582275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.582382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.624 [2024-05-15 15:53:16.582407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.624 qpair failed and we were unable to recover it. 
00:35:03.625 [2024-05-15 15:53:16.582521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.582631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.582656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.582785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.582901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.582927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.583063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.583234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.583269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.583438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.583556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.583582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.583701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.583839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.583864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.584002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.584109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.584135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.584285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.584403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.584429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 
00:35:03.625 [2024-05-15 15:53:16.584571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.584696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.584726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.584863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.585143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.585412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.585684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.585851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.585958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.586244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 
00:35:03.625 [2024-05-15 15:53:16.586533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.586812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.586973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.587114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.587260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.587286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.587404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.587538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.587567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.587680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.587820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.587845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.588002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.588275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 
00:35:03.625 [2024-05-15 15:53:16.588565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.588858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.588994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.589158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.589463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.589743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.589932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.590047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.590186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.590213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.590343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.590474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.590503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 
00:35:03.625 [2024-05-15 15:53:16.590644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.590780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.590805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.625 qpair failed and we were unable to recover it. 00:35:03.625 [2024-05-15 15:53:16.590918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.591059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.625 [2024-05-15 15:53:16.591084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.591245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.591359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.591384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.591523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.591683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.591709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.591873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.592173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.592499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 
00:35:03.626 [2024-05-15 15:53:16.592834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.592999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.593143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.593285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.593311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.593419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.593563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.593590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.593730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.593837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.593862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.594005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.594145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.594170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.594314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.594425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.594450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.594598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.594729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.594756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 
00:35:03.626 [2024-05-15 15:53:16.594895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.595174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.595443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.595734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.595923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.596073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.596190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.596221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.596352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.596468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.596495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.596676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.596818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.596844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 
00:35:03.626 [2024-05-15 15:53:16.596954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.597263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.597517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.597807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.597949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.598115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.598225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.598251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.626 qpair failed and we were unable to recover it. 00:35:03.626 [2024-05-15 15:53:16.598416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.598553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.626 [2024-05-15 15:53:16.598579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.598745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.598910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.598936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 
00:35:03.627 [2024-05-15 15:53:16.599066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.599197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.599245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.599364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.599477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.599502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.599629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.599779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.599804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.599940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.600244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.600495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.600751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.600917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 
00:35:03.627 [2024-05-15 15:53:16.601028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.601164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.601191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.601321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.601441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.601467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.601612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.601723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.601749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.601865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.602171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.602427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.602747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.602912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 
00:35:03.627 [2024-05-15 15:53:16.603018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.603138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.603164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.603330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.603462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.603488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.603606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.603741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.603767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.603877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.604174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.604454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.604734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.604905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 
00:35:03.627 [2024-05-15 15:53:16.605073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.605201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.605232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.605381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.605515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.605541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.605688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.605807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.605833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.605947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.606233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.606481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 00:35:03.627 [2024-05-15 15:53:16.606840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.627 [2024-05-15 15:53:16.606983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.627 qpair failed and we were unable to recover it. 
00:35:03.627 [2024-05-15 15:53:16.607124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.607270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.607297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.607437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.607574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.607600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.607764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.607897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.607923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.608044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.608204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.608236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.608380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.608521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.608547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.608688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.608824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.608849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.608963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.609114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.609139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 
00:35:03.628 [2024-05-15 15:53:16.609278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.609431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.609457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.609597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.609719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.609745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.609898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.610151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.610445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.610754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.610917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.611038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.611156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.611181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 
00:35:03.628 [2024-05-15 15:53:16.611314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.611453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.611479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.611588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.611726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.611752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.611909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.612160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.612422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.612722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.612858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.612998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 
00:35:03.628 [2024-05-15 15:53:16.613278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.613571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.613876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.613993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.614146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.614456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.614760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.614926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.615073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.615210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.615240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 
00:35:03.628 [2024-05-15 15:53:16.615374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.615493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.615519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.628 qpair failed and we were unable to recover it. 00:35:03.628 [2024-05-15 15:53:16.615656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.615791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.628 [2024-05-15 15:53:16.615818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.615960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.616249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.616556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.616846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.616989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.617104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.617226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.617254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 
00:35:03.629 [2024-05-15 15:53:16.617375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.617494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.617520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.617662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.617803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.617829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.617933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.618186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.618508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.618810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.618996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.619114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.619249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.619275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 
00:35:03.629 [2024-05-15 15:53:16.619389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.619526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.619552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.619668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.619780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.619806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.619941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.620208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.620493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.620794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.620971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.621136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.621255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.621282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 
00:35:03.629 [2024-05-15 15:53:16.621426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.621538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.621563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.621682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.621842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.621867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.622029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.622172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.622198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.622370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.622479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.622504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.622646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.622793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.622819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.622938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.623224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 
00:35:03.629 [2024-05-15 15:53:16.623517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.623796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.623930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.624032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.624168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.624193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.629 qpair failed and we were unable to recover it. 00:35:03.629 [2024-05-15 15:53:16.624324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.629 [2024-05-15 15:53:16.624460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.624489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.624625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.624762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.624788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.624899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.625155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 
00:35:03.630 [2024-05-15 15:53:16.625420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.625712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.625855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.626000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.626137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.626164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.626302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.626438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.626475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.626612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.626719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.626744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.626907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.627072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.627097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.627234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.627381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.627407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 
00:35:03.630 [2024-05-15 15:53:16.627529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.627661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.627688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.627863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.628159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.628471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.628724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.628873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.628989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.629132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.629157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.629311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.629444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.629480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 
00:35:03.630 [2024-05-15 15:53:16.629622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.629763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.629790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.629937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.630259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.630563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.630830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.630961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.631112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.631249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.631286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.631426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.631570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.631596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 
00:35:03.630 [2024-05-15 15:53:16.631712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.631840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.631871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.632039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.632159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.632185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.632334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.632449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.632480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.632629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.632774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.632800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.630 qpair failed and we were unable to recover it. 00:35:03.630 [2024-05-15 15:53:16.632941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.633078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.630 [2024-05-15 15:53:16.633103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.633226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.633346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.633372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.633515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.633647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.633673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 
00:35:03.631 [2024-05-15 15:53:16.633783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.633935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.633961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.634102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.634253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.634289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.634431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.634549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.634575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.634690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.634802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.634833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.634977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.635249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.635557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 
00:35:03.631 [2024-05-15 15:53:16.635835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.635996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.636111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.636251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.636276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.636397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.636537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.636562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.636698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.636830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.636855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.636977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.637277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.637542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 
00:35:03.631 [2024-05-15 15:53:16.637823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.637988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.638128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.638267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.638293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.638440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.638603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.638630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.638769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.638927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.638952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.639068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.639209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.639239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.639363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.639476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.639503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.639652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.639761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.639787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 
00:35:03.631 [2024-05-15 15:53:16.639923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.640061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.640087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.640194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.640361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.640387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.631 qpair failed and we were unable to recover it. 00:35:03.631 [2024-05-15 15:53:16.640511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.631 [2024-05-15 15:53:16.640678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.640705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.640856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.640992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.641136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.641437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.641741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.641880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 
00:35:03.632 [2024-05-15 15:53:16.641996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.642137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.642163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.642313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.642428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.642454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.642604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.642747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.642773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.642944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.643204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.643552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 00:35:03.632 [2024-05-15 15:53:16.643837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.643976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.632 [2024-05-15 15:53:16.644002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.632 qpair failed and we were unable to recover it. 
00:35:03.632 – 00:35:03.634 [2024-05-15 15:53:16.644109 – 15:53:16.661226] (repeated entries: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 — twice per attempt — followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.")
00:35:03.634 [2024-05-15 15:53:16.661398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.661546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.661571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 [2024-05-15 15:53:16.661725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.661875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.661906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 [2024-05-15 15:53:16.662029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.662168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.662195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 [2024-05-15 15:53:16.662367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.662577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.662614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 [2024-05-15 15:53:16.662801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.662995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.663030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 [2024-05-15 15:53:16.663187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.663366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.663400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1438000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 [2024-05-15 15:53:16.663529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.663701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.634 [2024-05-15 15:53:16.663728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.634 qpair failed and we were unable to recover it.
00:35:03.634 – 00:35:03.925 [2024-05-15 15:53:16.663889 – 15:53:16.689530] (repeated entries: the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence continues for every remaining connection attempt.)
00:35:03.925 [2024-05-15 15:53:16.689671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.689788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.689814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.689956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.690244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.690569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.690818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.690980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.691146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.691281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.691307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.691448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.691591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.691617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 
00:35:03.925 [2024-05-15 15:53:16.691759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.691909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.691935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.692077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.692240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.692266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.692407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.692547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.692572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.692733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.692873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.692898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.693059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.693172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.693198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.693323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.693449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.693475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.693586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.693763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.693789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 
00:35:03.925 [2024-05-15 15:53:16.693906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.694243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.694524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.694834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.694977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.695141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.695277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.695304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.695448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.695622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.695648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 00:35:03.925 [2024-05-15 15:53:16.695779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.695891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.925 [2024-05-15 15:53:16.695916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.925 qpair failed and we were unable to recover it. 
00:35:03.926 [2024-05-15 15:53:16.696031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.696168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.696195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.696363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.696502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.696528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.696640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.696759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.696785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.696929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.697207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.697527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.697839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.697978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 
00:35:03.926 [2024-05-15 15:53:16.698149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.698429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.698714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.698899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.699019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.699169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.699196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.699345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.699460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.699487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.699634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.699772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.699800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.699919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.700062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.700087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 
00:35:03.926 [2024-05-15 15:53:16.700224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.700339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.700365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.700529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.700670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.700695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.700860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.701183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.701506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.701850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.701990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.702130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 
00:35:03.926 [2024-05-15 15:53:16.702487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.702801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.702967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.703131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.703269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.703295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.703456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.703571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.703597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.703758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.703896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.703922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.704060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.704191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.704221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.704362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.704527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.704553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 
00:35:03.926 [2024-05-15 15:53:16.704696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.704837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.704863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.704982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.705121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.705147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.705285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.705422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.705448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.926 qpair failed and we were unable to recover it. 00:35:03.926 [2024-05-15 15:53:16.705589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.926 [2024-05-15 15:53:16.705703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.705729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.705866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.706165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.706501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 
00:35:03.927 [2024-05-15 15:53:16.706785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.706951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.707088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.707228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.707254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.707395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.707563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.707588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.707752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.707891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.707917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.708053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.708317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.708576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 
00:35:03.927 [2024-05-15 15:53:16.708840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.708986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.709103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.709238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.709264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.709408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.709545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.709570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.709709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.709852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.709879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.710017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.710158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.710184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.710359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.710483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.710510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.710647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.710766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.710792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 
00:35:03.927 [2024-05-15 15:53:16.710938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.711267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.711581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.711831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.711972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.712114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.712251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.712279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.712447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.712590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.712616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.712722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.712862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.712888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 
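The repeated errno = 111 in the block above is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 (4420 is the conventional NVMe/TCP port), so every reconnect attempt made through nvme_tcp_qpair_connect_sock is rejected immediately and the qpair cannot recover. The sketch below only illustrates that retry pattern; it is not part of the SPDK test suite, and the retry budget and back-off are assumptions, with only the address, port and errno value taken from the log.

/* Illustrative only: repeatedly connect() to 10.0.0.2:4420 while no
 * target is listening, reporting errno as the log does.
 * Retry budget and back-off are assumptions. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* With the target down this prints errno = 111 (ECONNREFUSED),
         * matching the posix_sock_create errors above. */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        sleep(1); /* back off before the next attempt */
    }
    return 1;
}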
00:35:03.927 [2024-05-15 15:53:16.713006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.713151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.713178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.713352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.713492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.713519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.713682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.713845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.713871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.714004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.714134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.714160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.714331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.714443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.714470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.714579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1479684 Killed "${NVMF_APP[@]}" "$@" 00:35:03.927 [2024-05-15 15:53:16.714713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.714744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 00:35:03.927 [2024-05-15 15:53:16.714911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 [2024-05-15 15:53:16.715028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.927 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:03.927 [2024-05-15 15:53:16.715054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.927 qpair failed and we were unable to recover it. 
00:35:03.928 [2024-05-15 15:53:16.715186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:03.928 [2024-05-15 15:53:16.715347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:03.928 [2024-05-15 15:53:16.715373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:03.928 [2024-05-15 15:53:16.715507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:03.928 [2024-05-15 15:53:16.715644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.715670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.715842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.715959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.715985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.716131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.716264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.716290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.716402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.716544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.716570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.716703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.716842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.716868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 
00:35:03.928 [2024-05-15 15:53:16.716976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.717118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.717143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.717263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.717385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.717410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.717586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.717731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.717757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.717899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.718209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.718503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.718811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.718984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 
00:35:03.928 [2024-05-15 15:53:16.719149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.719289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.719327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.719437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.719586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.719609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.719750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.719868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.719894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.720044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.720208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.720240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.720388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.720505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.720534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.720668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1480241 00:35:03.928 [2024-05-15 15:53:16.720829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.720854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:03.928 qpair failed and we were unable to recover it. 
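The trace interleaved above shows the harness relaunching the target with "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF0". In SPDK applications -m is the reactor core mask, so 0xF0 pins the restarted target to CPU cores 4 through 7. A small sketch of that bit-mask arithmetic, illustrative only and not taken from the test scripts:

/* Expand a hex core mask such as the 0xF0 passed to nvmf_tgt above
 * into the CPU indices it selects. Illustrative helper, not SPDK code. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0; /* value from the relaunch command in the log */

    printf("core mask 0x%llX selects cores:", mask);
    for (int cpu = 0; cpu < 64; cpu++) {
        if (mask & (1ULL << cpu))
            printf(" %d", cpu);
    }
    printf("\n"); /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}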
00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1480241 00:35:03.928 [2024-05-15 15:53:16.721000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.721142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.721167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1480241 ']' 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 [2024-05-15 15:53:16.721329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.928 [2024-05-15 15:53:16.721444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 [2024-05-15 15:53:16.721469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.928 qpair failed and we were unable to recover it. 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:03.928 [2024-05-15 15:53:16.721612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.928 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.929 [2024-05-15 15:53:16.721755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.721782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:03.929 [2024-05-15 15:53:16.721921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 15:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:03.929 [2024-05-15 15:53:16.722062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.722087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.722244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.722388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.722413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it.
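While the host side keeps logging refused connections, waitforlisten blocks until the relaunched nvmf_tgt (pid 1480241, the nvmfpid recorded above) is up and serving its RPC socket at /var/tmp/spdk.sock, as the echoed "Waiting for process..." message indicates. The sketch below shows one way such a readiness probe can be written; the socket path comes from the log, but the polling interval, attempt budget and function name are assumptions, and this is not the actual waitforlisten helper.

/* Illustrative readiness probe: poll connect() on a UNIX domain socket
 * (here /var/tmp/spdk.sock, as echoed in the log) until something is
 * accepting connections or the attempt budget runs out. Interval and
 * budget are assumptions, not the autotest values. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_tries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd); /* someone is listening; the socket is ready */
            return 0;
        }
        close(fd);
        usleep(100 * 1000); /* wait 100 ms before probing again */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("target is listening on /var/tmp/spdk.sock\n");
    else
        printf("timed out waiting for /var/tmp/spdk.sock\n");
    return 0;
}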
00:35:03.929 [2024-05-15 15:53:16.722538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.722691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.722715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.722855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.722991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.723128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.723443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.723724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.723862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.723972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.724087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.724112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.724267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.724433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.724457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 
00:35:03.929 [2024-05-15 15:53:16.724571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.724736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.724761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.724875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.725180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.725457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.725770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.725930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.726073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.726185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.726209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.726388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.726501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.726526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 
00:35:03.929 [2024-05-15 15:53:16.726689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.726802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.726827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.726949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.727223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.727531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.727812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.727967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.728083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.728227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.728251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.728391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.728510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.728534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 
00:35:03.929 [2024-05-15 15:53:16.728673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.728779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.728804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.728967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.729106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.729131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.729266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.729398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.729423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.729599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.729745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.729770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.729910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.730229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.929 [2024-05-15 15:53:16.730533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 
00:35:03.929 [2024-05-15 15:53:16.730835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.730976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.929 [2024-05-15 15:53:16.731000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.929 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.731126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.731272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.731296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.731409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.731549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.731574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.731710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.731828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.731852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.731998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.732163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.732187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.732313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.732477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.732503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.732649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.732790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.732815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 
00:35:03.930 [2024-05-15 15:53:16.732958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.733250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.733524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.733775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.733909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.734030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.734160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.734185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.734335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.734446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.734475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.734638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.734744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.734769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 
00:35:03.930 [2024-05-15 15:53:16.734883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.735182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.735483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.735827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.735991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.736105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.736246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.736271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.736385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.736495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.736519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.736660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.736816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.736841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 
00:35:03.930 [2024-05-15 15:53:16.736979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.737282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.737587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.737863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.737976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.738135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.738388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.738715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.738858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 
00:35:03.930 [2024-05-15 15:53:16.738972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.739084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.739108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.739251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.739394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.739418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.739558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.739718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.739744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.739887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.740060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.740084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.930 [2024-05-15 15:53:16.740225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.740394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.930 [2024-05-15 15:53:16.740423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.930 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.740587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.740730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.740754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.740889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 
00:35:03.931 [2024-05-15 15:53:16.741138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.741417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.741750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.741910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.742051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.742229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.742253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.742392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.742560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.742585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.742746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.742889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.742914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.743059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.743168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.743193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 
00:35:03.931 [2024-05-15 15:53:16.743359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.743466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.743496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.743618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.743755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.743779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.743917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.744225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.744502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.744812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.744974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.745112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.745223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.745249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 
00:35:03.931 [2024-05-15 15:53:16.745367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.745506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.745531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.745697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.745832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.745856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.745985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.746148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.746176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.746339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.746475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.746502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.746674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.746796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.746822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.746960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.747097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.747121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.747251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.747395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.747420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 
00:35:03.931 [2024-05-15 15:53:16.747559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.747705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.747730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.747893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.748225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.748480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.748832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.748972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.749113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.749255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.749281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.749400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.749538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.749563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 
00:35:03.931 [2024-05-15 15:53:16.749674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.749840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.749865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.931 qpair failed and we were unable to recover it. 00:35:03.931 [2024-05-15 15:53:16.750015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.931 [2024-05-15 15:53:16.750152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.750176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.750331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.750467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.750493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.750612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.750726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.750750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.750864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.751163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.751497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 
00:35:03.932 [2024-05-15 15:53:16.751775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.751939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.752103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.752241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.752275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.752409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.752571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.752596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.752749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.752862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.752887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.753025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.753158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.753183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.753333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.753469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.753493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.753655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.753792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.753817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 
00:35:03.932 [2024-05-15 15:53:16.753968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.754147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.754175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.754337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.754447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.754472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.754662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.754812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.754839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.755047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.755245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.755288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.755455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.755569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.755594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.755730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.755866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.755891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.756038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.756196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.756225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 
00:35:03.932 [2024-05-15 15:53:16.756367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.756503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.756528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.756639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.756788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.756813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.756950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.757250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.757583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.757850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.757981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.758006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.932 qpair failed and we were unable to recover it. 00:35:03.932 [2024-05-15 15:53:16.758150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.758279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.932 [2024-05-15 15:53:16.758308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 
00:35:03.933 [2024-05-15 15:53:16.758468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.758584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.758609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.758747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.758907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.758932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.759071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.759237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.759262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.759411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.759549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.759573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.759711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.759849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.759874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.760012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.760151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.760179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.760348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.760457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.760481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 
00:35:03.933 [2024-05-15 15:53:16.760619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.760755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.760779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.760950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.761113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.761140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.761300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.761502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.761531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.761810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.761992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.762144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.762406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.762709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.762899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 
00:35:03.933 [2024-05-15 15:53:16.763014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.763164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.763192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.763326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.763465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.763490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.763633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.763797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.763822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.763931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.764254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.764533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.764840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.764982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 
00:35:03.933 [2024-05-15 15:53:16.765146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.765457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.765756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.765923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.766054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.766209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.766238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.766358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.766521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.766545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.766678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.766814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.766839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.767001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767025] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:35:03.933 [2024-05-15 15:53:16.767100] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.933 [2024-05-15 15:53:16.767117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.767288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.767591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.933 [2024-05-15 15:53:16.767844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.767979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.933 [2024-05-15 15:53:16.768005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.933 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.768146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.768266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.768291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.768442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.768602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.768632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.768782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.768922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.768951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 
00:35:03.934 [2024-05-15 15:53:16.769108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.769226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.769250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.769396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.769564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.769588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.769773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.769961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.769989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.770143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.770314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.770343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.770520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.770699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.770727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.770930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.771066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.771091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.771203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.771383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.771412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 
00:35:03.934 [2024-05-15 15:53:16.771585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.771828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.771856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.771990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.772151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.772176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.772342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.772523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.772550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.772729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.772979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.773160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.773450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.773775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.773979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 
00:35:03.934 [2024-05-15 15:53:16.774160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.774283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.774324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.774477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.774653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.774681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.774843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.775035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.775058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.775170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.775360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.775388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.775539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.775739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.775766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.775921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.776194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 
00:35:03.934 [2024-05-15 15:53:16.776534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.776841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.776985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.777118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.777257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.777282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.777420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.777556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.777580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.777727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.777890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.777914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.778064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.778229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.778271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 00:35:03.934 [2024-05-15 15:53:16.778413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.778653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.934 [2024-05-15 15:53:16.778685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.934 qpair failed and we were unable to recover it. 
00:35:03.934 [2024-05-15 15:53:16.778851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.778992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.779016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.779152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.779345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.779371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.779486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.779658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.779700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.779908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.780253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.780564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.780836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.780999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 
00:35:03.935 [2024-05-15 15:53:16.781167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.781331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.781356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.781490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.781603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.781628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.781782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.781919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.781948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.782088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.782225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.782250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.782390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.782535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.782559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.782698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.782836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.782861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.782972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.783108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.783133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 
00:35:03.935 [2024-05-15 15:53:16.783262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.783425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.783449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.783562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.783677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.783702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.783863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.784170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.784480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.784780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.784944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.785061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.785197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.785228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 
00:35:03.935 [2024-05-15 15:53:16.785396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.785509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.785533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.785701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.785810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.785835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.785948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.786204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.786513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.786802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.786968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.787141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.787309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.787334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 
00:35:03.935 [2024-05-15 15:53:16.787482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.787616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.787641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.787778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.787917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.787944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.788089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.788240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.788265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.788402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.788538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.935 [2024-05-15 15:53:16.788564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.935 qpair failed and we were unable to recover it. 00:35:03.935 [2024-05-15 15:53:16.788703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.788861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.788886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.789002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.789140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.789165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.789308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.789431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.789456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 
00:35:03.936 [2024-05-15 15:53:16.789593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.789731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.789758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.789933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.790212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.790491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.790764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.790923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.791070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.791181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.791207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.791330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.791469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.791495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 
00:35:03.936 [2024-05-15 15:53:16.791635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.791798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.791823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.791964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.792269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.792581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.792863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.792997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.793161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.793470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 
00:35:03.936 [2024-05-15 15:53:16.793747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.793911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.794027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.794197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.794227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.794345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.794491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.794516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.794679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.794822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.794847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.794948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.795256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.795532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 
00:35:03.936 [2024-05-15 15:53:16.795873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.795984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.796009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.796177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.796321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.796347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.936 qpair failed and we were unable to recover it. 00:35:03.936 [2024-05-15 15:53:16.796463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.936 [2024-05-15 15:53:16.796604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.796629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.796750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.796900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.796926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.797108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.797244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.797271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.797435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.797574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.797598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.797705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.797877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.797902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 
00:35:03.937 [2024-05-15 15:53:16.798041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.798207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.798237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.798373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.798511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.798536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.798641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.798775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.798800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.798965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.799255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.799560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.799841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.799981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 
00:35:03.937 [2024-05-15 15:53:16.800150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.800431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.800759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.800896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.801033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.801149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.801173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.801341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.801506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.801531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.801673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.801837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.801863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.801980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.802117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.802141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 
00:35:03.937 [2024-05-15 15:53:16.802309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.802475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.802501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.802641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.802782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.802807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.802935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.803226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.803525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.803821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.803983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.804148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 
00:35:03.937 [2024-05-15 15:53:16.804422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.804698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.804864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.804980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.805117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.805142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.805299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.805437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.805462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.937 qpair failed and we were unable to recover it. 00:35:03.937 [2024-05-15 15:53:16.805574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.805679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.937 [2024-05-15 15:53:16.805704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.805865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.806143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 
00:35:03.938 [2024-05-15 15:53:16.806420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.806696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.806854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.938 [2024-05-15 15:53:16.806991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.807148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.807172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.807320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.807459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.807484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.807656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.807799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.807824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.807943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.808248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 
00:35:03.938 [2024-05-15 15:53:16.808524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.808825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.808986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.809151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.809286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.809312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.809435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.809577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.809601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.809745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.809856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.809880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.810022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.810173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.810197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.810339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.810484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.810509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 
00:35:03.938 [2024-05-15 15:53:16.810675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.810804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.810828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.810990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:03.938 [2024-05-15 15:53:16.811106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.811280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.811588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.811843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.811985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.812101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.812233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.812259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.812403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.812576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.812600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 
00:35:03.938 [2024-05-15 15:53:16.812715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.812853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.812878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.813023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.813327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.813576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.813857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.813991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.814127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.814264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.814289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 00:35:03.938 [2024-05-15 15:53:16.814410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.814554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.814578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.938 qpair failed and we were unable to recover it. 
00:35:03.938 [2024-05-15 15:53:16.814692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.814854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.938 [2024-05-15 15:53:16.814879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.815001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.815141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.815166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.815311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.815430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.815455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.815610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.815727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.815751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.815914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.816214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.816499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 
00:35:03.939 [2024-05-15 15:53:16.816802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.816969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.817107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.817261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.817286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.817416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.817585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.817609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.817752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.817863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.817887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.818009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.818172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.818196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.818322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.818445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.818469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.818610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.818727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.818752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 
00:35:03.939 [2024-05-15 15:53:16.818872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.819167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.819443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.819785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.819925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.820033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.820142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.820167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.820316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.820479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.820503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.820621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.820736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.820760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 
00:35:03.939 [2024-05-15 15:53:16.820886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.821184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.821467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.821794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.821949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.822084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.822226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.822252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.822391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.822521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.822545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.822690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.822839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.822863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 
00:35:03.939 [2024-05-15 15:53:16.823015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.823155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.823179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.823305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.823444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.823468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.823637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.823749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.823774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.823921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.824037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.824063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.939 qpair failed and we were unable to recover it. 00:35:03.939 [2024-05-15 15:53:16.824178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.824348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.939 [2024-05-15 15:53:16.824372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.824518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.824624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.824648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.824797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.824908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.824932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 
00:35:03.940 [2024-05-15 15:53:16.825069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.825231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.825256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.825396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.825533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.825558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.825696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.825843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.825868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.825993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.826252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.826513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.826798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.826986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 
00:35:03.940 [2024-05-15 15:53:16.827124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.827254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.827279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.827422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.827560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.827584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.827741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.827879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.827903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.828032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.828146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.828172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.828296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.828436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.828461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.828605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.828767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.828791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.828900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 
00:35:03.940 [2024-05-15 15:53:16.829152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.829463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.829770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.829906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.830031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.830190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.830221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.830362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.830479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.830504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.830623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.830850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.830875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.831017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.831128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.831152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 
00:35:03.940 [2024-05-15 15:53:16.831290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.831406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.831432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.831579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.831724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.831747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.831925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.832236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.832503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.832802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.832935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 00:35:03.940 [2024-05-15 15:53:16.833094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.833236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.940 [2024-05-15 15:53:16.833261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.940 qpair failed and we were unable to recover it. 
00:35:03.940 [2024-05-15 15:53:16.833375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.940 [2024-05-15 15:53:16.833499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.940 [2024-05-15 15:53:16.833524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.941 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 15:53:16.833 through 15:53:16.846 ...]
00:35:03.942 [2024-05-15 15:53:16.846657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect()/qpair failure sequence continues uninterrupted from 15:53:16.846 through 15:53:16.878 ...]
00:35:03.946 [2024-05-15 15:53:16.878294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.946 [2024-05-15 15:53:16.878437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.946 [2024-05-15 15:53:16.878461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.946 qpair failed and we were unable to recover it.
00:35:03.946 [2024-05-15 15:53:16.878584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.878736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.878761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.878877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.879208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.879506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.879760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.879956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.880080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.880235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.880261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.880416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.880582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.880607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 
00:35:03.946 [2024-05-15 15:53:16.880746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.880865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.880891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.881032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.881149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.881173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.881311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.881454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.881479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.881624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.881759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.881783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.881916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.882224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.882507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 
00:35:03.946 [2024-05-15 15:53:16.882795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.882966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.883093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.883260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.883285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.883422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.883583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.883608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.883744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.883885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.883910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.884028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.884284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.884568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 
00:35:03.946 [2024-05-15 15:53:16.884814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.884991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.885145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.885256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.885281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.885388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.885524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.885550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.885692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.885808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.885833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.886011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.886174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.886199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.886322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.886466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.886491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.886631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.886776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.886802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 
00:35:03.946 [2024-05-15 15:53:16.886968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.887080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.887104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.887279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.887417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.946 [2024-05-15 15:53:16.887441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.946 qpair failed and we were unable to recover it. 00:35:03.946 [2024-05-15 15:53:16.887569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.887729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.887753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.887891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.888164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.888443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.888697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.888865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 
00:35:03.947 [2024-05-15 15:53:16.889025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.889133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.889160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.889281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.889421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.889445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.889587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.889696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.889722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.889857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.890192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.890505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.890787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.890976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 
00:35:03.947 [2024-05-15 15:53:16.891121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.891257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.891283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.891453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.891599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.891624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.891761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.891874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.891900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.892045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.892181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.892205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.892352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.892461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.892485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.892621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.892766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.892790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.892957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 
00:35:03.947 [2024-05-15 15:53:16.893243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.893544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.893808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.893997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.894162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.894335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.894361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.894501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.894626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.894651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.894763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.894903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.894928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.895065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.895208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.895239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 
00:35:03.947 [2024-05-15 15:53:16.895354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.895518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.895543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.895678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.895816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.895841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.896006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.896262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.896530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.896832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.896995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.947 qpair failed and we were unable to recover it. 00:35:03.947 [2024-05-15 15:53:16.897145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.947 [2024-05-15 15:53:16.897314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.897341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 
00:35:03.948 [2024-05-15 15:53:16.897481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.897607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.897631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.897751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.897889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.897913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.898046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.898191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.898232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.898375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.898511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.898537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.898681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.898794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.898820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.898989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.899108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.899133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.899259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.899424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.899448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 
00:35:03.948 [2024-05-15 15:53:16.899616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.899733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.899758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.899895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.900169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.900496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.900823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.900995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.901188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.901455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 
00:35:03.948 [2024-05-15 15:53:16.901772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.901936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.902106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.902225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.902250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.902390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.902553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.902578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.902720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.902854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.902878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.903020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.903159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.903185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.903363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.903478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.903508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.903621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.903764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.903788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 
00:35:03.948 [2024-05-15 15:53:16.903912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.904159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.904460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.904752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.904920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.905064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.905199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.905231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.905348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.905486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.905521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.905645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.905763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.905789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 
00:35:03.948 [2024-05-15 15:53:16.905904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.906048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.906073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.906223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.906386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.906411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.948 qpair failed and we were unable to recover it. 00:35:03.948 [2024-05-15 15:53:16.906525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.906660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.948 [2024-05-15 15:53:16.906685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.906826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.906940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.906965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.907123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.907266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.907296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.907415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.907554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.907581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.907692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.907828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.907853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 
00:35:03.949 [2024-05-15 15:53:16.907964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.908266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.908515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.908838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.908984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.909121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.909234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.909259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.909372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.909524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.909548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 00:35:03.949 [2024-05-15 15:53:16.909714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.909852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.949 [2024-05-15 15:53:16.909877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.949 qpair failed and we were unable to recover it. 
00:35:03.951 [2024-05-15 15:53:16.932663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.951 [2024-05-15 15:53:16.932777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.951 [2024-05-15 15:53:16.932802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.951 qpair failed and we were unable to recover it.
00:35:03.951 [2024-05-15 15:53:16.932912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.951 [2024-05-15 15:53:16.933024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.951 [2024-05-15 15:53:16.933048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.951 qpair failed and we were unable to recover it.
00:35:03.952 [2024-05-15 15:53:16.933162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.952 [2024-05-15 15:53:16.933229] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:03.952 [2024-05-15 15:53:16.933265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:03.952 [2024-05-15 15:53:16.933280] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:03.952 [2024-05-15 15:53:16.933293] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:03.952 [2024-05-15 15:53:16.933295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.952 [2024-05-15 15:53:16.933304] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:03.952 [2024-05-15 15:53:16.933318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.952 qpair failed and we were unable to recover it.
00:35:03.952 [2024-05-15 15:53:16.933421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.952 [2024-05-15 15:53:16.933394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:35:03.952 [2024-05-15 15:53:16.933422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:35:03.952 [2024-05-15 15:53:16.933535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.952 [2024-05-15 15:53:16.933558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.952 [2024-05-15 15:53:16.933470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:35:03.952 qpair failed and we were unable to recover it.
00:35:03.952 [2024-05-15 15:53:16.933473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:35:03.952 [2024-05-15 15:53:16.933671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.952 [2024-05-15 15:53:16.933787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:03.952 [2024-05-15 15:53:16.933811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420
00:35:03.952 qpair failed and we were unable to recover it.
00:35:03.955 [2024-05-15 15:53:16.951748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.951859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.951883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.951990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.952274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.952529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.952795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.952939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.953081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.953211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.953240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.953354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.953493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.953518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 
00:35:03.955 [2024-05-15 15:53:16.953632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.953798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.953823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.953938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.954188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.954441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.954739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.954878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.954987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.955291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 
00:35:03.955 [2024-05-15 15:53:16.955580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.955875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.955996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.956131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.956407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.956676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.956809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.956946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.957224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 
00:35:03.955 [2024-05-15 15:53:16.957537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.957808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.957984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.958144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.958270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.958295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.958468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.958587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.958611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.958723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.958830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.958854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.958975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.959265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 
00:35:03.955 [2024-05-15 15:53:16.959518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.959775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.959931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.955 qpair failed and we were unable to recover it. 00:35:03.955 [2024-05-15 15:53:16.960051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.955 [2024-05-15 15:53:16.960159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.960294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.960569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.960842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.960985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.961120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.961282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.961308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 
00:35:03.956 [2024-05-15 15:53:16.961429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.961566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.961591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.961731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.961850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.961876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.961991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.962262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.962561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.962847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.962987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.963098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.963244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.963269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 
00:35:03.956 [2024-05-15 15:53:16.963385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.963547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.963571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.963687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.963807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.963832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.964004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.964287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.964535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.964819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.964978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.965115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.965230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.965254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 
00:35:03.956 [2024-05-15 15:53:16.965364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.965474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.965498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.965644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.965785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.965809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.965945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.966178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.966478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.966743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.966915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.967022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.967130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.967155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 
00:35:03.956 [2024-05-15 15:53:16.967274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.967392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.967417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.967548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.967684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.967709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.967861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.968146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.968460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.968727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.956 [2024-05-15 15:53:16.968853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.956 qpair failed and we were unable to recover it. 00:35:03.956 [2024-05-15 15:53:16.968997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 
00:35:03.957 [2024-05-15 15:53:16.969245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.969524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.969833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.969997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.970118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.970232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.970258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.970367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.970484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.970510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.970648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.970761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.970785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.970902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 
00:35:03.957 [2024-05-15 15:53:16.971202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.971456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.971725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.971882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.972001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.972261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.972544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.972818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.972980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 
00:35:03.957 [2024-05-15 15:53:16.973116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.973238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.973263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.973437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.973543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.973567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.973708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.973825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.973850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.973965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.974289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.974564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.974843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.974969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 
00:35:03.957 [2024-05-15 15:53:16.975101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.975210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.975269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.975385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.975494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.975518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.975685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.975802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.975826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.975971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.976248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.976521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 00:35:03.957 [2024-05-15 15:53:16.976827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.976964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.957 qpair failed and we were unable to recover it. 
00:35:03.957 [2024-05-15 15:53:16.977087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.957 [2024-05-15 15:53:16.977201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.977232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.977369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.977484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.977508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.977626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.977764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.977789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.977894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.978201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.978455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.978754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.978885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 
00:35:03.958 [2024-05-15 15:53:16.979057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.979188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.979213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.979350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.979459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.979484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.979619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.979730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.979755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.979924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.980177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.980442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.980715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.980851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 
00:35:03.958 [2024-05-15 15:53:16.980969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.981241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.981541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.981801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.981980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.982004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.982121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.982238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.958 [2024-05-15 15:53:16.982262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.958 qpair failed and we were unable to recover it. 00:35:03.958 [2024-05-15 15:53:16.982403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.982508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.982532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.982670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.982782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.982807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 
00:35:03.959 [2024-05-15 15:53:16.982946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.983223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.983465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.983742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.983913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.984077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.984195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.984250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.984366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.984533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.984557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.984687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.984798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.984822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 
00:35:03.959 [2024-05-15 15:53:16.984939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.985201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.985482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.985737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.985905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.986015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.986299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.986571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 
00:35:03.959 [2024-05-15 15:53:16.986821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.986986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.987127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.987244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.987270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.987415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.987525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.987551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.987691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.987832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.987856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.987965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.988240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.988540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 
00:35:03.959 [2024-05-15 15:53:16.988777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.988942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.959 qpair failed and we were unable to recover it. 00:35:03.959 [2024-05-15 15:53:16.989075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.959 [2024-05-15 15:53:16.989191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.989227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.989350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.989468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.989491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.989624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.989764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.989789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.989943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.990189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.990462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 
00:35:03.960 [2024-05-15 15:53:16.990746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.990934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.991040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.991337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.991628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.991869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.991982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.992160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.992437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 
00:35:03.960 [2024-05-15 15:53:16.992780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.992907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.993024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.993300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.993579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.993853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.993978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.994122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.994378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 
00:35:03.960 [2024-05-15 15:53:16.994639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.994801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.994913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.995220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.995574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:03.960 [2024-05-15 15:53:16.995825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.960 [2024-05-15 15:53:16.995951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:03.960 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.996066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.996178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.996204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.996323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.996445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.996470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 
00:35:04.232 [2024-05-15 15:53:16.996650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.996788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.996812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.996928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.997176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.997430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.997725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.997871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.997972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.998285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 
00:35:04.232 [2024-05-15 15:53:16.998567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.998833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.998963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.999076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.999341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.999598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:16.999873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:16.999988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:17.000012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:17.000132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:17.000249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:17.000275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1448000b90 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 
00:35:04.232 [2024-05-15 15:53:17.000413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:17.000579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.232 [2024-05-15 15:53:17.000609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.232 qpair failed and we were unable to recover it. 00:35:04.232 [2024-05-15 15:53:17.000818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.000933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.000958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.001081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.001192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.001225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.001343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.001463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.001489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.001632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.001769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.001794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.001925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.002212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 
00:35:04.233 [2024-05-15 15:53:17.002491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.002755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.002895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.002996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.003245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.003515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.003827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.003990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.004104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 
00:35:04.233 [2024-05-15 15:53:17.004366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.004634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.004876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.004988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.005157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.005412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.005669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.005801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.005927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 
00:35:04.233 [2024-05-15 15:53:17.006240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.006498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.006801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.006938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.007057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.007228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.007252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.007356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.007494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.007519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.007627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.007768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.007792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.007927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 
00:35:04.233 [2024-05-15 15:53:17.008241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.008489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.233 [2024-05-15 15:53:17.008789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.233 [2024-05-15 15:53:17.008918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.233 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.009039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.009325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.009596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.009871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.009977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 
00:35:04.234 [2024-05-15 15:53:17.010118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.010419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.010661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.010792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.010904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.011170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.011417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.011670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.011807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 
00:35:04.234 [2024-05-15 15:53:17.012011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.012289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.012576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.012823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.012986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.013157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.013271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.013295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.013409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.013551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.013576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.013719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.013823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.013847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 
00:35:04.234 [2024-05-15 15:53:17.013959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.014247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.014523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.014773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.014931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.015076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.015192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.015231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.015369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.015477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.015501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.015623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.015727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.015752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 
00:35:04.234 [2024-05-15 15:53:17.015896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.016201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.016501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.016751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.234 [2024-05-15 15:53:17.016882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.234 qpair failed and we were unable to recover it. 00:35:04.234 [2024-05-15 15:53:17.017021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.017302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.017560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 
00:35:04.235 [2024-05-15 15:53:17.017869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.017977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.018113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.018373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.018656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.018795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.018904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.019159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.019444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 
00:35:04.235 [2024-05-15 15:53:17.019723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.019891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.020005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.020318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.020577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.020821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.020948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.021062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.021317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 
00:35:04.235 [2024-05-15 15:53:17.021630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.021893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.021999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.022158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.022439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.022712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.022847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.022968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.023131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.023155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.023283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.023401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.023426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 
00:35:04.235 [2024-05-15 15:53:17.023603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.023732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.023756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.023884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.024168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.024409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.024681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.024819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.235 [2024-05-15 15:53:17.024929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.025033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.235 [2024-05-15 15:53:17.025057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.235 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.025174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.025294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.025319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 
00:35:04.236 [2024-05-15 15:53:17.025467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.025607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.025631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.025769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.025875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.025899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.026014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.026253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.026534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.026792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.026955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.027103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 
00:35:04.236 [2024-05-15 15:53:17.027383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.027618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.027887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.027992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.028131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.028431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.028682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.028838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.028952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 
00:35:04.236 [2024-05-15 15:53:17.029257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.029510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.029779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.029946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.030054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.030337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.030581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 00:35:04.236 [2024-05-15 15:53:17.030838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.030998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.236 qpair failed and we were unable to recover it. 
00:35:04.236 [2024-05-15 15:53:17.031106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.031270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.236 [2024-05-15 15:53:17.031323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.031436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.031552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.031576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.031715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.031830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.031855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.031973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.032231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.032481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.032755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.032891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 
00:35:04.237 [2024-05-15 15:53:17.033025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.033127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.033151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.033266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.033426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.237 [2024-05-15 15:53:17.033450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.237 qpair failed and we were unable to recover it. 00:35:04.237 [2024-05-15 15:53:17.033565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.033675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.033701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.033805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.033971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.033996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.034140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.034275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.034300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.034415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.034532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.034556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.034668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.034806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.034830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 
00:35:04.238 [2024-05-15 15:53:17.034965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.035245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.035489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.035745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.035932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.036076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.036352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.036591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 
00:35:04.238 [2024-05-15 15:53:17.036868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.036976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.037106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.037405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.037656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.037789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.037933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.038199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.038462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 
00:35:04.238 [2024-05-15 15:53:17.038729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.038858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.038961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.039202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.039513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.039788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.039959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.040077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.040367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 
00:35:04.238 [2024-05-15 15:53:17.040608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.040864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.040994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1440000b90 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.041122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.041229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.041254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.238 [2024-05-15 15:53:17.041371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.041477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.238 [2024-05-15 15:53:17.041501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.238 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.041621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.041779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.041803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.041924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.042169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 
00:35:04.239 [2024-05-15 15:53:17.042430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.042701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.042835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.042945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.043190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.043492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.043724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.043879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.043992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.044153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.044178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 
00:35:04.239 [2024-05-15 15:53:17.044354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.044493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.044517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.044626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.044766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.044791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.044900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.045200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.045476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.045790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.045929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.046044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 
00:35:04.239 [2024-05-15 15:53:17.046313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.046568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.046836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.046981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.047116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.047241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.047266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.047386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.047524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.047548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.047656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.047794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.047819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.047969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 
00:35:04.239 [2024-05-15 15:53:17.048231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.048511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.048761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.048890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.049004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.049123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.049147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.049277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.049394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.049418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.239 [2024-05-15 15:53:17.049527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.049649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.239 [2024-05-15 15:53:17.049673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.239 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.049813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.049947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.049971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 
00:35:04.240 [2024-05-15 15:53:17.050109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.050257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.050282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.050413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.050524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.050549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.050719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.050851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.050875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.051016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.051274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.051524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.051773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.051913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 
00:35:04.240 [2024-05-15 15:53:17.052049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.052301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.052549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.052799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.052941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.053056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.053331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.053591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 
00:35:04.240 [2024-05-15 15:53:17.053839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.053997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.054120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.054235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.054261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.054377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.054483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.054508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.054629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.054763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.054787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.054905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.055168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.055446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 
00:35:04.240 [2024-05-15 15:53:17.055690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.055820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.055931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.056189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.056443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.056700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.056845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.056953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.057072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.057097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.240 [2024-05-15 15:53:17.057207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.057342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.057367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 
00:35:04.240 [2024-05-15 15:53:17.057475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.057592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.240 [2024-05-15 15:53:17.057619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.240 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.057730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.057846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.057870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.057988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.058246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.058552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.058849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.058981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.059120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 
00:35:04.241 [2024-05-15 15:53:17.059389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.059660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.059824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.059938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.060201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.060473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.060750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.060882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.060997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 
00:35:04.241 [2024-05-15 15:53:17.061279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.061546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.061824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.061961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.062083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.062354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.062636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.062881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.062996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 
00:35:04.241 [2024-05-15 15:53:17.063127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.063500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.063782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.063924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.064036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.064335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.064577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.064826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.064972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 
00:35:04.241 [2024-05-15 15:53:17.065115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.065245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.065272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.065413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.065522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.065547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.241 qpair failed and we were unable to recover it. 00:35:04.241 [2024-05-15 15:53:17.065660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.065791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.241 [2024-05-15 15:53:17.065816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.065943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.066205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.066475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.066754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.066891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 
00:35:04.242 [2024-05-15 15:53:17.066996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.067253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.067516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.067758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.067910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.068019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.068272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.068518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 
00:35:04.242 [2024-05-15 15:53:17.068786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.068927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.069047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.069287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.069548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.069871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.069992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.070125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.070379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 
00:35:04.242 [2024-05-15 15:53:17.070622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.070876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.070981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.071122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.071440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.071688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.071818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.242 [2024-05-15 15:53:17.071924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.072039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.242 [2024-05-15 15:53:17.072065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.242 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.072205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.072324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.072353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 
00:35:04.243 [2024-05-15 15:53:17.072461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.072568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.072593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.072735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:35:04.243 [2024-05-15 15:53:17.072873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.072898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:04.243 [2024-05-15 15:53:17.073019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.243 [2024-05-15 15:53:17.073136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.073161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:04.243 [2024-05-15 15:53:17.073271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.073388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.073413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.073560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.073705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.073730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.073840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.073983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 
00:35:04.243 [2024-05-15 15:53:17.074138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.074423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.074724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.074861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.074978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.075239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.075518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.075764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.075892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 
00:35:04.243 [2024-05-15 15:53:17.076001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.076287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.076560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.076814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.076947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.077054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.077309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.077561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 
00:35:04.243 [2024-05-15 15:53:17.077807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.077965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.078106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.078226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.078251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.078374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.078487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.078513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.078625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.078761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.078787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.078927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.079181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 00:35:04.243 [2024-05-15 15:53:17.079455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.243 qpair failed and we were unable to recover it. 
00:35:04.243 [2024-05-15 15:53:17.079699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.243 [2024-05-15 15:53:17.079808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.079832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 00:35:04.244 [2024-05-15 15:53:17.079959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 00:35:04.244 [2024-05-15 15:53:17.080275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 00:35:04.244 [2024-05-15 15:53:17.080519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 00:35:04.244 [2024-05-15 15:53:17.080811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.080937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 00:35:04.244 [2024-05-15 15:53:17.081099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.081224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.081249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 00:35:04.244 [2024-05-15 15:53:17.081370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.081485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.244 [2024-05-15 15:53:17.081510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.244 qpair failed and we were unable to recover it. 
00:35:04.245 [2024-05-15 15:53:17.092964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.245 [2024-05-15 15:53:17.093101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.245 [2024-05-15 15:53:17.093126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:04.245 qpair failed and we were unable to recover it.
00:35:04.245 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:04.245 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:04.245 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:04.245 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:04.245 [2024-05-15 15:53:17.094399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.245 [2024-05-15 15:53:17.094532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.245 [2024-05-15 15:53:17.094557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:04.245 qpair failed and we were unable to recover it.
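The rpc_cmd trace above is where the harness creates the backing device for the target. In the SPDK test scripts, rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py against the running target's RPC socket (that wrapper detail is an assumption from the usual SPDK test layout; it is not shown in this log). A minimal standalone sketch of the same step, assuming a target listening on the default /var/tmp/spdk.sock, would be:

  # create a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

On success rpc.py prints the new bdev's name, which is the bare "Malloc0" line that shows up further down in this log.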
00:35:04.248 Malloc0
00:35:04.248 [2024-05-15 15:53:17.116387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.248 [2024-05-15 15:53:17.116524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.248 [2024-05-15 15:53:17.116549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:04.248 qpair failed and we were unable to recover it.
00:35:04.248 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:04.248 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:04.248 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:04.248 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:04.249 [2024-05-15 15:53:17.117736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.249 [2024-05-15 15:53:17.117838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:04.249 [2024-05-15 15:53:17.117862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420
00:35:04.249 qpair failed and we were unable to recover it.
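The second rpc_cmd trace is the transport-creation step. As a standalone sketch using the same rpc.py interface (again assuming the default RPC socket; the -o flag is reproduced verbatim from the trace above and its meaning is not spelled out in this log, so consult rpc.py's own help for it):

  # initialize the NVMe-oF TCP transport layer in the running target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  # optionally verify the transport was created
  ./scripts/rpc.py nvmf_get_transports

The "*** TCP Transport Init ***" notice from tcp.c a few lines below is the target-side confirmation that this RPC completed.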
00:35:04.249 [2024-05-15 15:53:17.119876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.249 [2024-05-15 15:53:17.120035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.120149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.120425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.120682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.120828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.120933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.121223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.121498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 
00:35:04.249 [2024-05-15 15:53:17.121769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.121930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.122042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.122293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.122561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.122842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.122969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.123110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.123223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.123248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.123368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.123480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.123505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 
00:35:04.249 [2024-05-15 15:53:17.123634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.123741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.123765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.123877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.124156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.124436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.249 qpair failed and we were unable to recover it. 00:35:04.249 [2024-05-15 15:53:17.124733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.249 [2024-05-15 15:53:17.124850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.124875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.124986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.125271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 
00:35:04.250 [2024-05-15 15:53:17.125521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.125789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.125949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.126081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.126195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.126225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.126367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.126482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.126508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.126674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.126787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.126811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.126929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.127212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 
00:35:04.250 [2024-05-15 15:53:17.127512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.127756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.127897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.128016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.250 [2024-05-15 15:53:17.128290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:04.250 [2024-05-15 15:53:17.128426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.250 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:04.250 [2024-05-15 15:53:17.128591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.128864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.128994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 
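Buried in the interleaved output above, host/target_disconnect.sh@22 issues rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001. A sketch of the same subsystem-creation step with the stock scripts/rpc.py client (assumed): -a allows any host NQN to connect and -s sets the serial number the controller reports.
  # create the NVMe-oF subsystem the host-side test connects to
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001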
00:35:04.250 [2024-05-15 15:53:17.129111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.129230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.129256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.129388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.129501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.129526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.129645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.129764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.129790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.129900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.130133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.130423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.130694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.130828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 
00:35:04.250 [2024-05-15 15:53:17.130962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.131237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.131474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.250 qpair failed and we were unable to recover it. 00:35:04.250 [2024-05-15 15:53:17.131725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.250 [2024-05-15 15:53:17.131837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.131861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.132026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.132301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.132585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 
00:35:04.251 [2024-05-15 15:53:17.132863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.132988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.133131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.133241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.133266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.133374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.133505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.133530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.133666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.133794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.133818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.133928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.134177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.134437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 
00:35:04.251 [2024-05-15 15:53:17.134687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.134846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.134977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.135271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.135591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.135857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.135997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.136133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.251 [2024-05-15 15:53:17.136274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.136299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 
00:35:04.251 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.251 [2024-05-15 15:53:17.136415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.251 [2024-05-15 15:53:17.136557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.136582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:04.251 [2024-05-15 15:53:17.136690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.136820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.136845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.136958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.137254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.137505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.137756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.137893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 
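host/target_disconnect.sh@24, traced at the start of the block above, attaches the Malloc0 bdev to the subsystem as a namespace. A sketch of that step with the stock scripts/rpc.py client (assumed; the NSID is auto-assigned unless one is passed explicitly):
  # expose Malloc0 as a namespace of cnode1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0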
00:35:04.251 [2024-05-15 15:53:17.138033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.138140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.138166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.251 qpair failed and we were unable to recover it. 00:35:04.251 [2024-05-15 15:53:17.138296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.251 [2024-05-15 15:53:17.138405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.138430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.138541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.138652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.138678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.138830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.138957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.138981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.139083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.139392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.139626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 
00:35:04.252 [2024-05-15 15:53:17.139856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.139988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.140102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.140213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.140261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.140394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.140560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.140585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.140729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.140847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.140872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.140983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.141244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.141515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 
00:35:04.252 [2024-05-15 15:53:17.141758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.141892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.142028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.142317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.142560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.142834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.142965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.143114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.143253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.143288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.143407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.143559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.143585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 
00:35:04.252 [2024-05-15 15:53:17.143725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.143858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.143883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.144029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.144139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.144164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.252 [2024-05-15 15:53:17.144277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.144399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.252 [2024-05-15 15:53:17.144423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.252 [2024-05-15 15:53:17.144546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:04.252 [2024-05-15 15:53:17.144677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.144702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.144818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.144920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.144944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.145074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.145210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.145242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 
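host/target_disconnect.sh@25, traced above, is the step that finally gives the host something to connect to: it adds a TCP listener on 10.0.0.2:4420 for cnode1. Once it completes, the target prints the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice seen a little further down, and the host's connect() retries stop failing with ECONNREFUSED. A sketch with the stock scripts/rpc.py client (assumed):
  # start listening for NVMe/TCP hosts on the test address
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420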
00:35:04.252 [2024-05-15 15:53:17.145389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.145495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.145519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.145632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.145736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.145761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.252 [2024-05-15 15:53:17.145881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.146026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.252 [2024-05-15 15:53:17.146051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.252 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.146165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.146328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.146353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.146490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.146599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.146625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.146754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.146859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.146884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.147007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 
00:35:04.253 [2024-05-15 15:53:17.147303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.147554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.147791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.147917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.148006] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:04.253 [2024-05-15 15:53:17.148038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.148180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.253 [2024-05-15 15:53:17.148208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23e50 with addr=10.0.0.2, port=4420 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.148286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.253 [2024-05-15 15:53:17.150854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.150996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.151026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.151042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.151055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.151090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 
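This block marks the turning point in the failure signature. Up to the "Target Listening" notice the host-side errors are pure socket-level ECONNREFUSED (errno = 111); from here on the TCP connection itself succeeds, but the target rejects the Fabrics CONNECT for the I/O queue ("Unknown controller ID 0x1") and the host sees the CONNECT completion fail with sct 1, sc 130, i.e. status code 0x82 in the command-specific status type, which in the NVMe-oF CONNECT status space reads as "Connect Invalid Parameters"; the CQ transport error -6 (ENXIO, "No such device or address") is the qpair being torn down afterwards. A small decoding aid, assuming a Linux host with python3 on the PATH:
  # translate the two recurring numbers in the log into something readable
  python3 -c 'import errno, os; print(111, errno.errorcode[111], "-", os.strerror(111)); print("sc", 130, "=", hex(130))'
  # expected output on Linux:
  #   111 ECONNREFUSED - Connection refused
  #   sc 130 = 0x82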
00:35:04.253 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.253 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:04.253 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.253 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:04.253 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.253 15:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1479827 00:35:04.253 [2024-05-15 15:53:17.160610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.160725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.160753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.160768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.160781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.160809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.170660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.170786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.170814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.170828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.170841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.170869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 
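From "wait 1479827" onwards the harness is just waiting on the background test process while qpair after qpair goes through the same sequence: _nvmf_ctrlr_add_io_qpair rejects the controller ID, the Fabrics CONNECT completes with sct 1, sc 130, and the qpair is dropped with CQ transport error -6; only the timestamps change in the blocks that follow. When skimming a capture like this, a one-liner such as the following (run against a saved copy of the console output; the filename is illustrative) collapses the repetition and confirms there is a single failure mode:
  grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' console.log | sort | uniq -c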
00:35:04.253 [2024-05-15 15:53:17.180577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.180695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.180726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.180742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.180755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.180783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.190642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.190763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.190790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.190805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.190818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.190846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.200645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.200758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.200784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.200799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.200812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.200840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 
00:35:04.253 [2024-05-15 15:53:17.210673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.210806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.210833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.210847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.210859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.210887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.220661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.220784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.220811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.220826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.220838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.220871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 00:35:04.253 [2024-05-15 15:53:17.230749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.253 [2024-05-15 15:53:17.230865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.253 [2024-05-15 15:53:17.230892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.253 [2024-05-15 15:53:17.230907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.253 [2024-05-15 15:53:17.230920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.253 [2024-05-15 15:53:17.230948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.253 qpair failed and we were unable to recover it. 
00:35:04.253 [2024-05-15 15:53:17.240736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.240858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.240885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.240899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.240914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.240942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 00:35:04.254 [2024-05-15 15:53:17.250746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.250859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.250886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.250901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.250913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.250941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 00:35:04.254 [2024-05-15 15:53:17.260763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.260899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.260927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.260945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.260958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.260987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 
00:35:04.254 [2024-05-15 15:53:17.270846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.270960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.270992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.271007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.271019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.271047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 00:35:04.254 [2024-05-15 15:53:17.280813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.280922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.280949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.280964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.280977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.281005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 00:35:04.254 [2024-05-15 15:53:17.290836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.290957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.290983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.290998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.291010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.291037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 
00:35:04.254 [2024-05-15 15:53:17.300945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.301067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.301093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.301107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.301120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.301148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 00:35:04.254 [2024-05-15 15:53:17.310961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.254 [2024-05-15 15:53:17.311076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.254 [2024-05-15 15:53:17.311103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.254 [2024-05-15 15:53:17.311118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.254 [2024-05-15 15:53:17.311130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.254 [2024-05-15 15:53:17.311163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.254 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.320923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.321038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.321070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.321085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.321099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.321127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 
00:35:04.514 [2024-05-15 15:53:17.331090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.331237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.331264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.331279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.331291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.331319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.340971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.341088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.341115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.341129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.341141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.341169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.351031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.351143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.351168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.351183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.351195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.351229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 
00:35:04.514 [2024-05-15 15:53:17.361022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.361135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.361165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.361181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.361193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.361227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.371145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.371267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.371293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.371308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.371320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.371348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.381134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.381264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.381290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.381305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.381317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.381345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 
00:35:04.514 [2024-05-15 15:53:17.391174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.391342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.391367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.391382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.391395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.391422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.401173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.401292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.401319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.401334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.401346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.401379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.411183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.411305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.411331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.411346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.411358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.411386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 
00:35:04.514 [2024-05-15 15:53:17.421190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.421313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.421340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.514 [2024-05-15 15:53:17.421354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.514 [2024-05-15 15:53:17.421366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.514 [2024-05-15 15:53:17.421394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.514 qpair failed and we were unable to recover it. 00:35:04.514 [2024-05-15 15:53:17.431244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.514 [2024-05-15 15:53:17.431369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.514 [2024-05-15 15:53:17.431395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.431409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.431421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.431449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.441300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.441417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.441443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.441458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.441471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.441498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 
00:35:04.515 [2024-05-15 15:53:17.451321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.451446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.451477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.451492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.451505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.451532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.461353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.461478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.461503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.461518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.461530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.461557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.471373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.471482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.471507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.471522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.471535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.471562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 
00:35:04.515 [2024-05-15 15:53:17.481423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.481556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.481582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.481596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.481609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.481638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.491457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.491574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.491600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.491615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.491633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.491662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.501521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.501688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.501714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.501728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.501740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.501768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 
00:35:04.515 [2024-05-15 15:53:17.511512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.511630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.511656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.511671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.511683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.511711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.521555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.521685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.521711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.521726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.521738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.521766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.531552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.531663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.531689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.531704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.531716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.531744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 
00:35:04.515 [2024-05-15 15:53:17.541589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.541716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.541742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.541756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.541769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.541797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.551665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.551779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.551805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.551820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.551832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.551859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 00:35:04.515 [2024-05-15 15:53:17.561685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.561847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.561873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.515 [2024-05-15 15:53:17.561887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.515 [2024-05-15 15:53:17.561900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.515 [2024-05-15 15:53:17.561928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.515 qpair failed and we were unable to recover it. 
00:35:04.515 [2024-05-15 15:53:17.571679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.515 [2024-05-15 15:53:17.571801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.515 [2024-05-15 15:53:17.571827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.516 [2024-05-15 15:53:17.571842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.516 [2024-05-15 15:53:17.571855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.516 [2024-05-15 15:53:17.571883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.516 qpair failed and we were unable to recover it. 00:35:04.516 [2024-05-15 15:53:17.581716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.516 [2024-05-15 15:53:17.581835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.516 [2024-05-15 15:53:17.581861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.516 [2024-05-15 15:53:17.581876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.516 [2024-05-15 15:53:17.581893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.516 [2024-05-15 15:53:17.581922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.516 qpair failed and we were unable to recover it. 00:35:04.516 [2024-05-15 15:53:17.591729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.516 [2024-05-15 15:53:17.591874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.516 [2024-05-15 15:53:17.591901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.516 [2024-05-15 15:53:17.591915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.516 [2024-05-15 15:53:17.591927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.516 [2024-05-15 15:53:17.591955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.516 qpair failed and we were unable to recover it. 
00:35:04.516 [2024-05-15 15:53:17.601806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.516 [2024-05-15 15:53:17.601916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.516 [2024-05-15 15:53:17.601942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.516 [2024-05-15 15:53:17.601956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.516 [2024-05-15 15:53:17.601969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.516 [2024-05-15 15:53:17.601996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.516 qpair failed and we were unable to recover it. 00:35:04.516 [2024-05-15 15:53:17.611826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.516 [2024-05-15 15:53:17.611949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.516 [2024-05-15 15:53:17.611976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.516 [2024-05-15 15:53:17.611994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.516 [2024-05-15 15:53:17.612008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.516 [2024-05-15 15:53:17.612036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.516 qpair failed and we were unable to recover it. 00:35:04.775 [2024-05-15 15:53:17.621856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.775 [2024-05-15 15:53:17.621990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.775 [2024-05-15 15:53:17.622016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.775 [2024-05-15 15:53:17.622031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.775 [2024-05-15 15:53:17.622044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.775 [2024-05-15 15:53:17.622072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.775 qpair failed and we were unable to recover it. 
00:35:04.775 [2024-05-15 15:53:17.631914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.775 [2024-05-15 15:53:17.632036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.775 [2024-05-15 15:53:17.632062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.775 [2024-05-15 15:53:17.632076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.775 [2024-05-15 15:53:17.632089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.775 [2024-05-15 15:53:17.632116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.775 qpair failed and we were unable to recover it. 00:35:04.775 [2024-05-15 15:53:17.641914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.775 [2024-05-15 15:53:17.642022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.775 [2024-05-15 15:53:17.642049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.775 [2024-05-15 15:53:17.642063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.775 [2024-05-15 15:53:17.642076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.775 [2024-05-15 15:53:17.642103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.775 qpair failed and we were unable to recover it. 00:35:04.775 [2024-05-15 15:53:17.651954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.775 [2024-05-15 15:53:17.652073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.775 [2024-05-15 15:53:17.652099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.775 [2024-05-15 15:53:17.652114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.775 [2024-05-15 15:53:17.652126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.775 [2024-05-15 15:53:17.652154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.775 qpair failed and we were unable to recover it. 
00:35:04.775 [2024-05-15 15:53:17.662084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.775 [2024-05-15 15:53:17.662209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.775 [2024-05-15 15:53:17.662241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.662256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.662268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.662297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.671990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.672103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.672129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.672143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.672161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.672190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.682020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.682130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.682155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.682170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.682183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.682211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 
00:35:04.776 [2024-05-15 15:53:17.692044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.692165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.692191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.692206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.692225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.692255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.702182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.702310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.702336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.702351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.702363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.702391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.712120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.712241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.712267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.712282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.712294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.712322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 
00:35:04.776 [2024-05-15 15:53:17.722146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.722270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.722298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.722312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.722324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.722353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.732174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.732298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.732325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.732340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.732352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.732380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.742236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.742362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.742388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.742403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.742415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.742443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 
00:35:04.776 [2024-05-15 15:53:17.752241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.752353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.752379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.752393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.752406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.752434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.762255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.762369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.762395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.762415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.762429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.762458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.772336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.772450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.772475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.772489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.772502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.772538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 
00:35:04.776 [2024-05-15 15:53:17.782345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.782528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.782554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.782569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.782581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.782609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.792358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.792471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.776 [2024-05-15 15:53:17.792497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.776 [2024-05-15 15:53:17.792511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.776 [2024-05-15 15:53:17.792524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.776 [2024-05-15 15:53:17.792552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.776 qpair failed and we were unable to recover it. 00:35:04.776 [2024-05-15 15:53:17.802446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.776 [2024-05-15 15:53:17.802601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.802628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.802643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.802655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.802683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 
00:35:04.777 [2024-05-15 15:53:17.812434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.812567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.812593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.812607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.812619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.812647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 00:35:04.777 [2024-05-15 15:53:17.822449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.822591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.822617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.822631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.822644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.822671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 00:35:04.777 [2024-05-15 15:53:17.832480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.832602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.832628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.832643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.832655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.832683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 
00:35:04.777 [2024-05-15 15:53:17.842499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.842610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.842636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.842651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.842663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.842691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 00:35:04.777 [2024-05-15 15:53:17.852516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.852674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.852700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.852719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.852732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.852760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 00:35:04.777 [2024-05-15 15:53:17.862594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.862713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.862739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.862754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.862766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.862795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 
00:35:04.777 [2024-05-15 15:53:17.872589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:04.777 [2024-05-15 15:53:17.872708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:04.777 [2024-05-15 15:53:17.872735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:04.777 [2024-05-15 15:53:17.872749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:04.777 [2024-05-15 15:53:17.872762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:04.777 [2024-05-15 15:53:17.872789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:04.777 qpair failed and we were unable to recover it. 00:35:05.036 [2024-05-15 15:53:17.882600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.036 [2024-05-15 15:53:17.882733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.036 [2024-05-15 15:53:17.882759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.036 [2024-05-15 15:53:17.882774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.036 [2024-05-15 15:53:17.882786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.036 [2024-05-15 15:53:17.882814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.036 qpair failed and we were unable to recover it. 00:35:05.036 [2024-05-15 15:53:17.892626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.036 [2024-05-15 15:53:17.892740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.036 [2024-05-15 15:53:17.892766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.036 [2024-05-15 15:53:17.892781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.036 [2024-05-15 15:53:17.892793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.036 [2024-05-15 15:53:17.892821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.036 qpair failed and we were unable to recover it. 
00:35:05.036 [2024-05-15 15:53:17.902665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.036 [2024-05-15 15:53:17.902784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.036 [2024-05-15 15:53:17.902810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.036 [2024-05-15 15:53:17.902824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.036 [2024-05-15 15:53:17.902836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.036 [2024-05-15 15:53:17.902863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.036 qpair failed and we were unable to recover it. 00:35:05.036 [2024-05-15 15:53:17.912791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.912916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.912942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.912957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.912969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.912997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:17.922807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.922925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.922951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.922965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.922978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.923005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 
00:35:05.037 [2024-05-15 15:53:17.932862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.933003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.933029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.933043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.933055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.933082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:17.942800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.942928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.942953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.942973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.942986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.943014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:17.952960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.953088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.953113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.953128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.953140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.953168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 
00:35:05.037 [2024-05-15 15:53:17.962971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.963078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.963104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.963118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.963130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.963158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:17.972961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.973078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.973103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.973118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.973131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.973158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:17.982938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.983058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.983084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.983098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.983111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.983138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 
00:35:05.037 [2024-05-15 15:53:17.992948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:17.993074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:17.993100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:17.993115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:17.993127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:17.993155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:18.002939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:18.003052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:18.003079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:18.003093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:18.003105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:18.003133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:18.012986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:18.013100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:18.013127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:18.013142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:18.013154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:18.013183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 
00:35:05.037 [2024-05-15 15:53:18.023006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:18.023123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:18.023148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:18.023163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:18.023175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:18.023203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:18.033133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:18.033252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:18.033283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:18.033299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:18.033311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:18.033338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 00:35:05.037 [2024-05-15 15:53:18.043063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.037 [2024-05-15 15:53:18.043172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.037 [2024-05-15 15:53:18.043198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.037 [2024-05-15 15:53:18.043212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.037 [2024-05-15 15:53:18.043233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.037 [2024-05-15 15:53:18.043261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.037 qpair failed and we were unable to recover it. 
00:35:05.037 [2024-05-15 15:53:18.053190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.053311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.053337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.053352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.053364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.053392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 00:35:05.038 [2024-05-15 15:53:18.063154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.063273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.063299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.063313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.063325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.063353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 00:35:05.038 [2024-05-15 15:53:18.073253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.073380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.073407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.073422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.073434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.073462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 
00:35:05.038 [2024-05-15 15:53:18.083192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.083317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.083344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.083359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.083372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.083400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 00:35:05.038 [2024-05-15 15:53:18.093206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.093327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.093352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.093367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.093379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.093407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 00:35:05.038 [2024-05-15 15:53:18.103290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.103415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.103441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.103456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.103471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.103499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 
00:35:05.038 [2024-05-15 15:53:18.113266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.113382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.113408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.113423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.113435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.113463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 00:35:05.038 [2024-05-15 15:53:18.123285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.123404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.123434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.123450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.123462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.123491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 00:35:05.038 [2024-05-15 15:53:18.133341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.038 [2024-05-15 15:53:18.133464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.038 [2024-05-15 15:53:18.133492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.038 [2024-05-15 15:53:18.133508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.038 [2024-05-15 15:53:18.133520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.038 [2024-05-15 15:53:18.133549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.038 qpair failed and we were unable to recover it. 
00:35:05.297 [2024-05-15 15:53:18.143354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.297 [2024-05-15 15:53:18.143521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.297 [2024-05-15 15:53:18.143547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.297 [2024-05-15 15:53:18.143562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.297 [2024-05-15 15:53:18.143574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.297 [2024-05-15 15:53:18.143602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.297 qpair failed and we were unable to recover it. 00:35:05.297 [2024-05-15 15:53:18.153518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.297 [2024-05-15 15:53:18.153629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.297 [2024-05-15 15:53:18.153656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.297 [2024-05-15 15:53:18.153671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.297 [2024-05-15 15:53:18.153683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.297 [2024-05-15 15:53:18.153711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.297 qpair failed and we were unable to recover it. 00:35:05.297 [2024-05-15 15:53:18.163422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.163546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.163572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.163587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.163600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.163633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 
00:35:05.298 [2024-05-15 15:53:18.173503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.173623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.173649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.173664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.173676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.173704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.183477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.183592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.183618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.183632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.183645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.183673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.193495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.193610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.193636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.193650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.193662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.193690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 
00:35:05.298 [2024-05-15 15:53:18.203510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.203621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.203647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.203661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.203673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.203701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.213545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.213706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.213738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.213757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.213770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.213798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.223581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.223700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.223726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.223741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.223754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.223782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 
00:35:05.298 [2024-05-15 15:53:18.233612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.233744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.233770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.233785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.233797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.233824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.243638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.243746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.243780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.243795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.243807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.243835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.253686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.253797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.253822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.253836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.253850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.253891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 
00:35:05.298 [2024-05-15 15:53:18.263697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.263815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.263840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.263854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.263866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.263894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.273754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.273881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.273916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.273930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.273942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.273970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.283771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.283890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.283916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.283931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.283944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.283971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 
00:35:05.298 [2024-05-15 15:53:18.293764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.298 [2024-05-15 15:53:18.293880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.298 [2024-05-15 15:53:18.293906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.298 [2024-05-15 15:53:18.293921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.298 [2024-05-15 15:53:18.293934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.298 [2024-05-15 15:53:18.293961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.298 qpair failed and we were unable to recover it. 00:35:05.298 [2024-05-15 15:53:18.303840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.304009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.304040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.304056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.304069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.304096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.299 [2024-05-15 15:53:18.313878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.314005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.314031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.314046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.314059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.314086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 
00:35:05.299 [2024-05-15 15:53:18.323867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.323991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.324016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.324031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.324044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.324083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.299 [2024-05-15 15:53:18.333895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.334017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.334044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.334059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.334072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.334101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.299 [2024-05-15 15:53:18.343945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.344079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.344106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.344122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.344134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.344168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 
00:35:05.299 [2024-05-15 15:53:18.354054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.354173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.354210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.354236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.354249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.354278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.299 [2024-05-15 15:53:18.364089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.364231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.364260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.364275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.364288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.364316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.299 [2024-05-15 15:53:18.374033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.374154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.374180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.374196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.374208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.374247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 
00:35:05.299 [2024-05-15 15:53:18.384069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.384192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.384224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.384241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.384254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.384282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.299 [2024-05-15 15:53:18.394066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.299 [2024-05-15 15:53:18.394186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.299 [2024-05-15 15:53:18.394224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.299 [2024-05-15 15:53:18.394242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.299 [2024-05-15 15:53:18.394254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.299 [2024-05-15 15:53:18.394283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.299 qpair failed and we were unable to recover it. 00:35:05.558 [2024-05-15 15:53:18.404106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.404278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.404305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.404320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.404334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.404363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 
00:35:05.558 [2024-05-15 15:53:18.414115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.414239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.414266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.414281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.414294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.414322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 00:35:05.558 [2024-05-15 15:53:18.424198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.424338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.424365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.424381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.424394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.424422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 00:35:05.558 [2024-05-15 15:53:18.434298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.434429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.434455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.434471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.434492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.434522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 
00:35:05.558 [2024-05-15 15:53:18.444225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.444346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.444373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.444388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.444401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.444429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 00:35:05.558 [2024-05-15 15:53:18.454238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.454363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.454389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.454405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.454417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.454446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 00:35:05.558 [2024-05-15 15:53:18.464383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.464511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.558 [2024-05-15 15:53:18.464536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.558 [2024-05-15 15:53:18.464552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.558 [2024-05-15 15:53:18.464565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.558 [2024-05-15 15:53:18.464592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.558 qpair failed and we were unable to recover it. 
00:35:05.558 [2024-05-15 15:53:18.474308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.558 [2024-05-15 15:53:18.474442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.474469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.474485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.474498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.474527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.484329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.484450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.484477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.484492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.484505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.484534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.494368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.494486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.494523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.494538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.494551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.494579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 
00:35:05.559 [2024-05-15 15:53:18.504416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.504548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.504582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.504597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.504610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.504639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.514419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.514551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.514577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.514592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.514605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.514633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.524476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.524597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.524621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.524636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.524657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.524686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 
00:35:05.559 [2024-05-15 15:53:18.534634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.534753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.534780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.534796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.534808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.534836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.544523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.544652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.544679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.544696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.544712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.544741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.554530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.554668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.554695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.554710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.554723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.554752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 
00:35:05.559 [2024-05-15 15:53:18.564571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.564702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.564729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.564745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.564757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.564786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.574646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.574817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.574844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.574859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.574871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.574899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.584622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.584788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.584814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.584829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.584842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.584870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 
00:35:05.559 [2024-05-15 15:53:18.594683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.594802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.594829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.594844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.594857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.594885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.604699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.604862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.559 [2024-05-15 15:53:18.604898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.559 [2024-05-15 15:53:18.604914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.559 [2024-05-15 15:53:18.604927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.559 [2024-05-15 15:53:18.604955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.559 qpair failed and we were unable to recover it. 00:35:05.559 [2024-05-15 15:53:18.614748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.559 [2024-05-15 15:53:18.614871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.560 [2024-05-15 15:53:18.614898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.560 [2024-05-15 15:53:18.614913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.560 [2024-05-15 15:53:18.614932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.560 [2024-05-15 15:53:18.614961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.560 qpair failed and we were unable to recover it. 
00:35:05.560 [2024-05-15 15:53:18.624775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.560 [2024-05-15 15:53:18.624921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.560 [2024-05-15 15:53:18.624947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.560 [2024-05-15 15:53:18.624962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.560 [2024-05-15 15:53:18.624975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.560 [2024-05-15 15:53:18.625003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.560 qpair failed and we were unable to recover it. 00:35:05.560 [2024-05-15 15:53:18.634793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.560 [2024-05-15 15:53:18.634920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.560 [2024-05-15 15:53:18.634947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.560 [2024-05-15 15:53:18.634962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.560 [2024-05-15 15:53:18.634975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.560 [2024-05-15 15:53:18.635003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.560 qpair failed and we were unable to recover it. 00:35:05.560 [2024-05-15 15:53:18.644817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.560 [2024-05-15 15:53:18.644984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.560 [2024-05-15 15:53:18.645010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.560 [2024-05-15 15:53:18.645025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.560 [2024-05-15 15:53:18.645038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.560 [2024-05-15 15:53:18.645066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.560 qpair failed and we were unable to recover it. 
00:35:05.560 [2024-05-15 15:53:18.654933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.560 [2024-05-15 15:53:18.655075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.560 [2024-05-15 15:53:18.655101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.560 [2024-05-15 15:53:18.655116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.560 [2024-05-15 15:53:18.655129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.560 [2024-05-15 15:53:18.655158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.560 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.664837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.664968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.664995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.665010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.665023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.665052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.674855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.674976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.675002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.675018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.675031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.675059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 
00:35:05.819 [2024-05-15 15:53:18.684986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.685104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.685131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.685146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.685159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.685187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.694941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.695077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.695106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.695125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.695138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.695167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.705080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.705212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.705245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.705266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.705280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.705309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 
00:35:05.819 [2024-05-15 15:53:18.714983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.715109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.715137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.715152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.715165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.715193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.725017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.725139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.725166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.725182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.725195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.725229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.735021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.735137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.735164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.735179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.735192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.735227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 
00:35:05.819 [2024-05-15 15:53:18.745161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.745334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.745360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.745376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.745388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.745417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.819 [2024-05-15 15:53:18.755087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.819 [2024-05-15 15:53:18.755219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.819 [2024-05-15 15:53:18.755246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.819 [2024-05-15 15:53:18.755264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.819 [2024-05-15 15:53:18.755277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.819 [2024-05-15 15:53:18.755306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.819 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.765135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.765282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.765309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.765324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.765337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.765366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 
00:35:05.820 [2024-05-15 15:53:18.775142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.775268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.775293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.775308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.775320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.775349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.785193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.785380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.785408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.785425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.785438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.785466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.795238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.795361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.795388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.795409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.795422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.795450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 
00:35:05.820 [2024-05-15 15:53:18.805263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.805374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.805398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.805413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.805426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.805454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.815266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.815393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.815419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.815435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.815448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.815476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.825376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.825502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.825527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.825542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.825556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.825584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 
00:35:05.820 [2024-05-15 15:53:18.835366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.835482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.835509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.835524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.835537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.835566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.845335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.845453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.845480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.845496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.845509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.845537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.820 [2024-05-15 15:53:18.855467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.855582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.855610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.855625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.855638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.855667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 
00:35:05.820 [2024-05-15 15:53:18.865425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.820 [2024-05-15 15:53:18.865550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.820 [2024-05-15 15:53:18.865575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.820 [2024-05-15 15:53:18.865590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.820 [2024-05-15 15:53:18.865603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.820 [2024-05-15 15:53:18.865631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.820 qpair failed and we were unable to recover it. 00:35:05.821 [2024-05-15 15:53:18.875421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.821 [2024-05-15 15:53:18.875538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.821 [2024-05-15 15:53:18.875563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.821 [2024-05-15 15:53:18.875578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.821 [2024-05-15 15:53:18.875591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.821 [2024-05-15 15:53:18.875620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.821 qpair failed and we were unable to recover it. 00:35:05.821 [2024-05-15 15:53:18.885469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.821 [2024-05-15 15:53:18.885634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.821 [2024-05-15 15:53:18.885659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.821 [2024-05-15 15:53:18.885680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.821 [2024-05-15 15:53:18.885694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.821 [2024-05-15 15:53:18.885723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.821 qpair failed and we were unable to recover it. 
00:35:05.821 [2024-05-15 15:53:18.895493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.821 [2024-05-15 15:53:18.895606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.821 [2024-05-15 15:53:18.895633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.821 [2024-05-15 15:53:18.895648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.821 [2024-05-15 15:53:18.895661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.821 [2024-05-15 15:53:18.895689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.821 qpair failed and we were unable to recover it. 00:35:05.821 [2024-05-15 15:53:18.905536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.821 [2024-05-15 15:53:18.905654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.821 [2024-05-15 15:53:18.905681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.821 [2024-05-15 15:53:18.905696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.821 [2024-05-15 15:53:18.905710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.821 [2024-05-15 15:53:18.905738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.821 qpair failed and we were unable to recover it. 00:35:05.821 [2024-05-15 15:53:18.915523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:05.821 [2024-05-15 15:53:18.915635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:05.821 [2024-05-15 15:53:18.915662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:05.821 [2024-05-15 15:53:18.915677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:05.821 [2024-05-15 15:53:18.915690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:05.821 [2024-05-15 15:53:18.915719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:05.821 qpair failed and we were unable to recover it. 
00:35:06.080 [2024-05-15 15:53:18.925580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.080 [2024-05-15 15:53:18.925696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.080 [2024-05-15 15:53:18.925723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.080 [2024-05-15 15:53:18.925739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.080 [2024-05-15 15:53:18.925752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.080 [2024-05-15 15:53:18.925782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.080 qpair failed and we were unable to recover it. 00:35:06.080 [2024-05-15 15:53:18.935585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.080 [2024-05-15 15:53:18.935734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.080 [2024-05-15 15:53:18.935761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.080 [2024-05-15 15:53:18.935777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.080 [2024-05-15 15:53:18.935790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.080 [2024-05-15 15:53:18.935819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.080 qpair failed and we were unable to recover it. 00:35:06.080 [2024-05-15 15:53:18.945654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.080 [2024-05-15 15:53:18.945775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.080 [2024-05-15 15:53:18.945800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.080 [2024-05-15 15:53:18.945815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.080 [2024-05-15 15:53:18.945828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.080 [2024-05-15 15:53:18.945856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.080 qpair failed and we were unable to recover it. 
00:35:06.080 [2024-05-15 15:53:18.955691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.080 [2024-05-15 15:53:18.955811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.080 [2024-05-15 15:53:18.955837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.080 [2024-05-15 15:53:18.955852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.080 [2024-05-15 15:53:18.955865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.080 [2024-05-15 15:53:18.955893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.080 qpair failed and we were unable to recover it. 00:35:06.080 [2024-05-15 15:53:18.965703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.080 [2024-05-15 15:53:18.965825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.080 [2024-05-15 15:53:18.965850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.080 [2024-05-15 15:53:18.965866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.080 [2024-05-15 15:53:18.965879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.080 [2024-05-15 15:53:18.965907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.080 qpair failed and we were unable to recover it. 00:35:06.080 [2024-05-15 15:53:18.975746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.080 [2024-05-15 15:53:18.975864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.080 [2024-05-15 15:53:18.975894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.080 [2024-05-15 15:53:18.975911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.080 [2024-05-15 15:53:18.975924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.080 [2024-05-15 15:53:18.975952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.080 qpair failed and we were unable to recover it. 
00:35:06.080 [2024-05-15 15:53:18.985768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:18.985932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:18.985957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:18.985972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:18.985985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:18.986014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:18.995756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:18.995873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:18.995898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:18.995913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:18.995926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:18.995955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.005784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.005899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.005925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.005940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.005954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.005982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 
00:35:06.081 [2024-05-15 15:53:19.015815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.015932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.015958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.015973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.015986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.016014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.025863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.025983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.026009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.026024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.026037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.026065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.035901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.036022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.036049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.036065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.036081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.036110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 
00:35:06.081 [2024-05-15 15:53:19.045918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.046027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.046053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.046068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.046082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.046111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.055937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.056069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.056095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.056110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.056123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.056151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.065976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.066099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.066130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.066146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.066159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.066187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 
00:35:06.081 [2024-05-15 15:53:19.076035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.076204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.076237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.076253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.076266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.076296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.086104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.086228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.086255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.086273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.086286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.086316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.096041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.096170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.096195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.096211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.096231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.096261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 
00:35:06.081 [2024-05-15 15:53:19.106074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.106208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.106240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.106255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.106269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.081 [2024-05-15 15:53:19.106302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.081 qpair failed and we were unable to recover it. 00:35:06.081 [2024-05-15 15:53:19.116106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.081 [2024-05-15 15:53:19.116232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.081 [2024-05-15 15:53:19.116258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.081 [2024-05-15 15:53:19.116273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.081 [2024-05-15 15:53:19.116289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.116318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 00:35:06.082 [2024-05-15 15:53:19.126132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.082 [2024-05-15 15:53:19.126254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.082 [2024-05-15 15:53:19.126279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.082 [2024-05-15 15:53:19.126293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.082 [2024-05-15 15:53:19.126306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.126336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 
00:35:06.082 [2024-05-15 15:53:19.136155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.082 [2024-05-15 15:53:19.136349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.082 [2024-05-15 15:53:19.136376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.082 [2024-05-15 15:53:19.136391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.082 [2024-05-15 15:53:19.136404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.136434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 00:35:06.082 [2024-05-15 15:53:19.146225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.082 [2024-05-15 15:53:19.146359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.082 [2024-05-15 15:53:19.146386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.082 [2024-05-15 15:53:19.146402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.082 [2024-05-15 15:53:19.146415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.146445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 00:35:06.082 [2024-05-15 15:53:19.156273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.082 [2024-05-15 15:53:19.156397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.082 [2024-05-15 15:53:19.156428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.082 [2024-05-15 15:53:19.156444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.082 [2024-05-15 15:53:19.156457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.156486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 
00:35:06.082 [2024-05-15 15:53:19.166270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.082 [2024-05-15 15:53:19.166394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.082 [2024-05-15 15:53:19.166420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.082 [2024-05-15 15:53:19.166434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.082 [2024-05-15 15:53:19.166448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.166477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 00:35:06.082 [2024-05-15 15:53:19.176254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.082 [2024-05-15 15:53:19.176370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.082 [2024-05-15 15:53:19.176396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.082 [2024-05-15 15:53:19.176411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.082 [2024-05-15 15:53:19.176424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.082 [2024-05-15 15:53:19.176453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.082 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.186312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.186467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.186494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.186509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.186522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.186551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 
00:35:06.341 [2024-05-15 15:53:19.196395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.196566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.196592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.196606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.196620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.196655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.206347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.206461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.206486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.206501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.206514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.206543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.216466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.216573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.216599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.216613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.216627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.216655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 
00:35:06.341 [2024-05-15 15:53:19.226427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.226557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.226582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.226597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.226610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.226639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.236424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.236541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.236566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.236581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.236594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.236622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.246596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.246729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.246759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.246775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.246788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.246816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 
00:35:06.341 [2024-05-15 15:53:19.256479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.256588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.256613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.256627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.256640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.256668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.266572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.266692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.266717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.266732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.266745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.266773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.276571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.276685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.276710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.276725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.276739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.276767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 
00:35:06.341 [2024-05-15 15:53:19.286667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.286775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.341 [2024-05-15 15:53:19.286800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.341 [2024-05-15 15:53:19.286815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.341 [2024-05-15 15:53:19.286828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.341 [2024-05-15 15:53:19.286864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.341 qpair failed and we were unable to recover it. 00:35:06.341 [2024-05-15 15:53:19.296596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.341 [2024-05-15 15:53:19.296708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.296734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.296749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.296762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.296791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.306734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.306886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.306911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.306926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.306939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.306967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 
00:35:06.342 [2024-05-15 15:53:19.316703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.316812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.316836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.316851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.316864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.316892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.326746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.326867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.326894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.326909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.326923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.326951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.336788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.336918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.336949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.336965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.336978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.337007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 
00:35:06.342 [2024-05-15 15:53:19.346808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.346973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.347000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.347015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.347029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.347057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.356783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.356902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.356928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.356944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.356957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.356985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.366875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.367055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.367082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.367098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.367111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.367139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 
00:35:06.342 [2024-05-15 15:53:19.376825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.376961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.376988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.377003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.377022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.377051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.386889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.387007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.387032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.387048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.387061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.387089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.396937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.397052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.397079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.397094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.397107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.397135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 
00:35:06.342 [2024-05-15 15:53:19.407005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.407114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.407140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.407154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.407167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.407196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.416938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.417051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.417077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.417092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.417104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.417132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 00:35:06.342 [2024-05-15 15:53:19.427009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.427154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.342 [2024-05-15 15:53:19.427183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.342 [2024-05-15 15:53:19.427198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.342 [2024-05-15 15:53:19.427223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.342 [2024-05-15 15:53:19.427256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.342 qpair failed and we were unable to recover it. 
00:35:06.342 [2024-05-15 15:53:19.437002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.342 [2024-05-15 15:53:19.437123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.343 [2024-05-15 15:53:19.437150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.343 [2024-05-15 15:53:19.437165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.343 [2024-05-15 15:53:19.437178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.343 [2024-05-15 15:53:19.437206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.343 qpair failed and we were unable to recover it. 00:35:06.601 [2024-05-15 15:53:19.447023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.601 [2024-05-15 15:53:19.447133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.447161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.447176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.447189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.447224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.457067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.457195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.457231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.457248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.457261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.457290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 
00:35:06.602 [2024-05-15 15:53:19.467102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.467230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.467257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.467272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.467291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.467320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.477223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.477343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.477368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.477383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.477396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.477424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.487178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.487341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.487368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.487383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.487396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.487424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 
00:35:06.602 [2024-05-15 15:53:19.497279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.497396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.497423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.497438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.497452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.497481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.507227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.507343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.507370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.507385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.507397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.507426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.517329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.517453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.517481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.517498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.517514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.517543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 
00:35:06.602 [2024-05-15 15:53:19.527374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.527492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.527517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.527532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.527545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.527574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.537293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.537419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.537444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.537458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.537471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.537500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.547383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.547505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.547529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.547544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.547557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.547586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 
00:35:06.602 [2024-05-15 15:53:19.557355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.557467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.557493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.557508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.557527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.557556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.567383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.567511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.567537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.567553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.567566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.567596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 00:35:06.602 [2024-05-15 15:53:19.577552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.602 [2024-05-15 15:53:19.577662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.602 [2024-05-15 15:53:19.577688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.602 [2024-05-15 15:53:19.577703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.602 [2024-05-15 15:53:19.577715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.602 [2024-05-15 15:53:19.577744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.602 qpair failed and we were unable to recover it. 
00:35:06.602 [2024-05-15 15:53:19.587476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.587608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.587633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.587648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.587661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.587689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.597477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.597607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.597633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.597648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.597660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.597689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.607536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.607706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.607732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.607746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.607760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.607789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 
00:35:06.603 [2024-05-15 15:53:19.617504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.617644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.617669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.617684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.617697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.617725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.627553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.627666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.627690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.627705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.627719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.627747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.637594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.637728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.637756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.637771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.637784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.637813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 
00:35:06.603 [2024-05-15 15:53:19.647600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.647716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.647741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.647761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.647776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.647804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.657635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.657773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.657799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.657814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.657827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.657855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.667658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.667778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.667802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.667817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.667830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.667859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 
00:35:06.603 [2024-05-15 15:53:19.677746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.677864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.677889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.677904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.677917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.677946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.687787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.687917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.687942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.687958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.687971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.687999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 00:35:06.603 [2024-05-15 15:53:19.697737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.603 [2024-05-15 15:53:19.697850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.603 [2024-05-15 15:53:19.697876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.603 [2024-05-15 15:53:19.697891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.603 [2024-05-15 15:53:19.697904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.603 [2024-05-15 15:53:19.697932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.603 qpair failed and we were unable to recover it. 
00:35:06.862 [2024-05-15 15:53:19.707775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.862 [2024-05-15 15:53:19.707892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.862 [2024-05-15 15:53:19.707919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.862 [2024-05-15 15:53:19.707933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.862 [2024-05-15 15:53:19.707947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.862 [2024-05-15 15:53:19.707975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.862 qpair failed and we were unable to recover it. 00:35:06.862 [2024-05-15 15:53:19.717894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.862 [2024-05-15 15:53:19.718011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.862 [2024-05-15 15:53:19.718037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.862 [2024-05-15 15:53:19.718052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.862 [2024-05-15 15:53:19.718065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.862 [2024-05-15 15:53:19.718093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.862 qpair failed and we were unable to recover it. 00:35:06.862 [2024-05-15 15:53:19.727835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.862 [2024-05-15 15:53:19.727950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.862 [2024-05-15 15:53:19.727976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.862 [2024-05-15 15:53:19.727991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.862 [2024-05-15 15:53:19.728004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.862 [2024-05-15 15:53:19.728032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.862 qpair failed and we were unable to recover it. 
00:35:06.862 [2024-05-15 15:53:19.737880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.862 [2024-05-15 15:53:19.737988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.862 [2024-05-15 15:53:19.738013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.862 [2024-05-15 15:53:19.738033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.862 [2024-05-15 15:53:19.738047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.862 [2024-05-15 15:53:19.738075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.862 qpair failed and we were unable to recover it. 00:35:06.862 [2024-05-15 15:53:19.747912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.862 [2024-05-15 15:53:19.748035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.862 [2024-05-15 15:53:19.748061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.862 [2024-05-15 15:53:19.748076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.862 [2024-05-15 15:53:19.748089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.862 [2024-05-15 15:53:19.748117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.862 qpair failed and we were unable to recover it. 00:35:06.862 [2024-05-15 15:53:19.757962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.862 [2024-05-15 15:53:19.758091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.862 [2024-05-15 15:53:19.758116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.862 [2024-05-15 15:53:19.758132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.758145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.758173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 
00:35:06.863 [2024-05-15 15:53:19.767962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.768079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.768104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.768119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.768133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.768161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.778007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.778163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.778188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.778203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.778222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.778252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.788031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.788157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.788181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.788196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.788209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.788244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 
00:35:06.863 [2024-05-15 15:53:19.798059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.798172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.798199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.798222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.798237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.798265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.808153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.808283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.808309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.808324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.808337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.808365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.818125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.818259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.818286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.818301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.818314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.818342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 
00:35:06.863 [2024-05-15 15:53:19.828173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.828347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.828374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.828395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.828408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.828436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.838182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.838308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.838337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.838352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.838369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.838399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.848196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.848325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.848352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.848367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.848380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.848409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 
00:35:06.863 [2024-05-15 15:53:19.858252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.858367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.858394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.858409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.858422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.858451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.868387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.868509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.868537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.868552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.868564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.868593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.878302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.878415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.878442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.878457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.878470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.878498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 
00:35:06.863 [2024-05-15 15:53:19.888297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.888408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.888434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.888449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.863 [2024-05-15 15:53:19.888462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.863 [2024-05-15 15:53:19.888490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.863 qpair failed and we were unable to recover it. 00:35:06.863 [2024-05-15 15:53:19.898419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.863 [2024-05-15 15:53:19.898527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.863 [2024-05-15 15:53:19.898555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.863 [2024-05-15 15:53:19.898570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.898583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.898611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 00:35:06.864 [2024-05-15 15:53:19.908406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.864 [2024-05-15 15:53:19.908523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.864 [2024-05-15 15:53:19.908550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.864 [2024-05-15 15:53:19.908565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.908578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.908605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 
00:35:06.864 [2024-05-15 15:53:19.918402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.864 [2024-05-15 15:53:19.918517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.864 [2024-05-15 15:53:19.918543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.864 [2024-05-15 15:53:19.918564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.918578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.918606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 00:35:06.864 [2024-05-15 15:53:19.928438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.864 [2024-05-15 15:53:19.928555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.864 [2024-05-15 15:53:19.928582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.864 [2024-05-15 15:53:19.928600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.928613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.928642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 00:35:06.864 [2024-05-15 15:53:19.938440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.864 [2024-05-15 15:53:19.938551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.864 [2024-05-15 15:53:19.938578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.864 [2024-05-15 15:53:19.938593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.938606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.938634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 
00:35:06.864 [2024-05-15 15:53:19.948525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.864 [2024-05-15 15:53:19.948649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.864 [2024-05-15 15:53:19.948676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.864 [2024-05-15 15:53:19.948691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.948705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.948733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 00:35:06.864 [2024-05-15 15:53:19.958579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:06.864 [2024-05-15 15:53:19.958710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:06.864 [2024-05-15 15:53:19.958736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:06.864 [2024-05-15 15:53:19.958751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:06.864 [2024-05-15 15:53:19.958764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:06.864 [2024-05-15 15:53:19.958792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:06.864 qpair failed and we were unable to recover it. 00:35:07.123 [2024-05-15 15:53:19.968539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:19.968674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:19.968702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:19.968717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:19.968730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:19.968758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 
00:35:07.123 [2024-05-15 15:53:19.978565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:19.978705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:19.978732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:19.978748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:19.978761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:19.978789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 00:35:07.123 [2024-05-15 15:53:19.988632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:19.988747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:19.988775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:19.988790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:19.988803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:19.988832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 00:35:07.123 [2024-05-15 15:53:19.998626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:19.998747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:19.998773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:19.998789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:19.998802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:19.998830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 
00:35:07.123 [2024-05-15 15:53:20.008666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:20.008798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:20.008834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:20.008852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:20.008865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:20.008895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 00:35:07.123 [2024-05-15 15:53:20.018713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:20.018858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:20.018887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:20.018903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:20.018916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:20.018945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 00:35:07.123 [2024-05-15 15:53:20.028754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:20.028916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.123 [2024-05-15 15:53:20.028944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.123 [2024-05-15 15:53:20.028959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.123 [2024-05-15 15:53:20.028972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.123 [2024-05-15 15:53:20.029001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.123 qpair failed and we were unable to recover it. 
00:35:07.123 [2024-05-15 15:53:20.038749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.123 [2024-05-15 15:53:20.038879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.038906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.038921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.038935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.038963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.048787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.048911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.048939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.048954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.048967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.049004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.058970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.059138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.059165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.059181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.059193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.059233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 
00:35:07.124 [2024-05-15 15:53:20.068830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.068958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.068984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.069000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.069013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.069040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.078870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.079004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.079032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.079047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.079060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.079101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.088885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.089019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.089047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.089062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.089075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.089104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 
00:35:07.124 [2024-05-15 15:53:20.099022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.099163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.099210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.099237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.099251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.099282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.108946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.109081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.109109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.109124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.109137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.109165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.119052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.119179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.119211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.119233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.119246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.119275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 
00:35:07.124 [2024-05-15 15:53:20.129014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.129135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.129163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.129181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.129204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.129239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.139028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.139147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.139174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.139189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.139209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.139251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.149075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.149209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.149242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.149257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.149269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.149297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 
00:35:07.124 [2024-05-15 15:53:20.159067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.159184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.159344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.159370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.159384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.159413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.169162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.124 [2024-05-15 15:53:20.169336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.124 [2024-05-15 15:53:20.169363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.124 [2024-05-15 15:53:20.169378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.124 [2024-05-15 15:53:20.169391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.124 [2024-05-15 15:53:20.169419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.124 qpair failed and we were unable to recover it. 00:35:07.124 [2024-05-15 15:53:20.179163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.125 [2024-05-15 15:53:20.179298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.125 [2024-05-15 15:53:20.179325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.125 [2024-05-15 15:53:20.179340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.125 [2024-05-15 15:53:20.179353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.125 [2024-05-15 15:53:20.179381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.125 qpair failed and we were unable to recover it. 
00:35:07.125 [2024-05-15 15:53:20.189205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.125 [2024-05-15 15:53:20.189385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.125 [2024-05-15 15:53:20.189418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.125 [2024-05-15 15:53:20.189436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.125 [2024-05-15 15:53:20.189452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.125 [2024-05-15 15:53:20.189481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.125 qpair failed and we were unable to recover it. 00:35:07.125 [2024-05-15 15:53:20.199230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.125 [2024-05-15 15:53:20.199392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.125 [2024-05-15 15:53:20.199419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.125 [2024-05-15 15:53:20.199434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.125 [2024-05-15 15:53:20.199447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.125 [2024-05-15 15:53:20.199476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.125 qpair failed and we were unable to recover it. 00:35:07.125 [2024-05-15 15:53:20.209252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.125 [2024-05-15 15:53:20.209375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.125 [2024-05-15 15:53:20.209402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.125 [2024-05-15 15:53:20.209417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.125 [2024-05-15 15:53:20.209430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.125 [2024-05-15 15:53:20.209459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.125 qpair failed and we were unable to recover it. 
00:35:07.125 [2024-05-15 15:53:20.219246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.125 [2024-05-15 15:53:20.219385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.125 [2024-05-15 15:53:20.219410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.125 [2024-05-15 15:53:20.219425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.125 [2024-05-15 15:53:20.219438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.125 [2024-05-15 15:53:20.219466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.125 qpair failed and we were unable to recover it. 00:35:07.383 [2024-05-15 15:53:20.229414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.383 [2024-05-15 15:53:20.229541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.383 [2024-05-15 15:53:20.229568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.383 [2024-05-15 15:53:20.229584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.383 [2024-05-15 15:53:20.229597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.383 [2024-05-15 15:53:20.229631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.383 qpair failed and we were unable to recover it. 00:35:07.383 [2024-05-15 15:53:20.239363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.239492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.239519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.239534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.239547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.239575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 
00:35:07.384 [2024-05-15 15:53:20.249322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.249431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.249457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.249473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.249486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.249514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.259381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.259504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.259531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.259546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.259558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.259587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.269429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.269553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.269580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.269595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.269608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.269636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 
00:35:07.384 [2024-05-15 15:53:20.279412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.279530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.279561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.279578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.279590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.279619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.289457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.289587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.289614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.289629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.289642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.289670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.299500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.299634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.299660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.299675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.299688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.299716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 
00:35:07.384 [2024-05-15 15:53:20.309557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.309717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.309744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.309759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.309771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.309799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.319537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.319658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.319684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.319699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.319717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.319746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.329562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.329685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.329710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.329725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.329738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.329776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 
00:35:07.384 [2024-05-15 15:53:20.339593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.339742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.339769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.339784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.339797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.339824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.349622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.349741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.349767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.349782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.349795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.349823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.359714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.359862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.359889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.359904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.359917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.359945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 
00:35:07.384 [2024-05-15 15:53:20.369677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.384 [2024-05-15 15:53:20.369811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.384 [2024-05-15 15:53:20.369838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.384 [2024-05-15 15:53:20.369853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.384 [2024-05-15 15:53:20.369866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.384 [2024-05-15 15:53:20.369894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.384 qpair failed and we were unable to recover it. 00:35:07.384 [2024-05-15 15:53:20.379719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.379851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.379877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.379892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.379905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.379933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.389793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.389915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.389941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.389956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.389969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.389997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 
00:35:07.385 [2024-05-15 15:53:20.399850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.399975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.400001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.400017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.400030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.400058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.409903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.410028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.410063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.410081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.410100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.410129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.419838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.420016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.420045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.420061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.420073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.420102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 
00:35:07.385 [2024-05-15 15:53:20.429867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.430001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.430028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.430044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.430057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.430085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.439897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.440069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.440096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.440111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.440124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.440153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.449918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.450042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.450069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.450087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.450100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.450129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 
00:35:07.385 [2024-05-15 15:53:20.460029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.460153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.460181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.460196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.460209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.460246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.469987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.470124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.470151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.470167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.470179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.470207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 00:35:07.385 [2024-05-15 15:53:20.479992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.385 [2024-05-15 15:53:20.480113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.385 [2024-05-15 15:53:20.480139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.385 [2024-05-15 15:53:20.480154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.385 [2024-05-15 15:53:20.480166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.385 [2024-05-15 15:53:20.480195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.385 qpair failed and we were unable to recover it. 
00:35:07.644 [2024-05-15 15:53:20.490021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.644 [2024-05-15 15:53:20.490133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.644 [2024-05-15 15:53:20.490160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.644 [2024-05-15 15:53:20.490176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.644 [2024-05-15 15:53:20.490189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.644 [2024-05-15 15:53:20.490235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.644 qpair failed and we were unable to recover it. 00:35:07.644 [2024-05-15 15:53:20.500062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.644 [2024-05-15 15:53:20.500253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.644 [2024-05-15 15:53:20.500282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.644 [2024-05-15 15:53:20.500299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.644 [2024-05-15 15:53:20.500317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.644 [2024-05-15 15:53:20.500347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.644 qpair failed and we were unable to recover it. 00:35:07.644 [2024-05-15 15:53:20.510142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.644 [2024-05-15 15:53:20.510284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.644 [2024-05-15 15:53:20.510311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.644 [2024-05-15 15:53:20.510326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.644 [2024-05-15 15:53:20.510339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.644 [2024-05-15 15:53:20.510368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.644 qpair failed and we were unable to recover it. 
00:35:07.645 [2024-05-15 15:53:20.520169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.520340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.520367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.520382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.520395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.520423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.530262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.530377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.530404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.530420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.530433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.530461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.540206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.540326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.540356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.540375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.540388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.540417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 
00:35:07.645 [2024-05-15 15:53:20.550260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.550395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.550421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.550437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.550449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.550477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.560243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.560358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.560385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.560401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.560413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.560442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.570354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.570471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.570498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.570525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.570538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.570567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 
00:35:07.645 [2024-05-15 15:53:20.580271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.580397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.580424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.580439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.580452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.580481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.590310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.590431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.590458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.590478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.590492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.590520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.600339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.600461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.600488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.600503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.600516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.600556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 
00:35:07.645 [2024-05-15 15:53:20.610386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.610506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.610532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.610548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.610561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.610589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.620386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.620535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.620561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.620576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.620589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.620617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.630452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.630625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.630652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.630668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.630680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.630708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 
00:35:07.645 [2024-05-15 15:53:20.640458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.640587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.640614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.640629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.640642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.645 [2024-05-15 15:53:20.640670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.645 qpair failed and we were unable to recover it. 00:35:07.645 [2024-05-15 15:53:20.650494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.645 [2024-05-15 15:53:20.650629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.645 [2024-05-15 15:53:20.650657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.645 [2024-05-15 15:53:20.650675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.645 [2024-05-15 15:53:20.650690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.650719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.646 [2024-05-15 15:53:20.660498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.660640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.660668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.660683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.660696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.660725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 
00:35:07.646 [2024-05-15 15:53:20.670518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.670637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.670663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.670678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.670691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.670719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.646 [2024-05-15 15:53:20.680540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.680662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.680688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.680709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.680723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.680752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.646 [2024-05-15 15:53:20.690570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.690692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.690719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.690734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.690747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.690777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 
00:35:07.646 [2024-05-15 15:53:20.700588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.700708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.700734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.700749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.700762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.700790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.646 [2024-05-15 15:53:20.710628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.710755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.710781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.710797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.710809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.710837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.646 [2024-05-15 15:53:20.720647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.720759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.720786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.720801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.720814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.720842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 
00:35:07.646 [2024-05-15 15:53:20.730714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.730841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.730868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.730884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.730896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.730924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.646 [2024-05-15 15:53:20.740700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.646 [2024-05-15 15:53:20.740814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.646 [2024-05-15 15:53:20.740841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.646 [2024-05-15 15:53:20.740857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.646 [2024-05-15 15:53:20.740869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.646 [2024-05-15 15:53:20.740899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.646 qpair failed and we were unable to recover it. 00:35:07.905 [2024-05-15 15:53:20.750747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.905 [2024-05-15 15:53:20.750875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.905 [2024-05-15 15:53:20.750902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.905 [2024-05-15 15:53:20.750918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.905 [2024-05-15 15:53:20.750931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.905 [2024-05-15 15:53:20.750959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.905 qpair failed and we were unable to recover it. 
00:35:07.905 [2024-05-15 15:53:20.760801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.905 [2024-05-15 15:53:20.760919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.905 [2024-05-15 15:53:20.760945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.905 [2024-05-15 15:53:20.760961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.905 [2024-05-15 15:53:20.760973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.905 [2024-05-15 15:53:20.761002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.905 qpair failed and we were unable to recover it. 00:35:07.905 [2024-05-15 15:53:20.770792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.905 [2024-05-15 15:53:20.770913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.905 [2024-05-15 15:53:20.770939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.770960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.770974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.771002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.780845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.780979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.781004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.781019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.781032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.781061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 
00:35:07.906 [2024-05-15 15:53:20.790847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.790961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.790988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.791003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.791015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.791043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.800903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.801022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.801048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.801064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.801076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.801104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.810920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.811038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.811065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.811080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.811093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.811121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 
00:35:07.906 [2024-05-15 15:53:20.820958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.821083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.821110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.821126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.821138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.821166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.830956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.831086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.831111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.831125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.831139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.831167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.841022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.841146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.841172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.841188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.841211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.841250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 
00:35:07.906 [2024-05-15 15:53:20.851056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.851174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.851211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.851234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.851247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.851276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.861067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.861188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.861221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.861243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.861256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.861284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.871083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.871263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.871290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.871306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.871319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.871347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 
00:35:07.906 [2024-05-15 15:53:20.881136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.881261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.881286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.881301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.881314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.881342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.891148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.891281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.891306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.891321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.891334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.891363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 00:35:07.906 [2024-05-15 15:53:20.901169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.901294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.906 [2024-05-15 15:53:20.901319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.906 [2024-05-15 15:53:20.901333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.906 [2024-05-15 15:53:20.901347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.906 [2024-05-15 15:53:20.901375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.906 qpair failed and we were unable to recover it. 
00:35:07.906 [2024-05-15 15:53:20.911246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.906 [2024-05-15 15:53:20.911402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.911427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.911442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.911455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.911483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:07.907 [2024-05-15 15:53:20.921226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.921346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.921371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.921386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.921403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.921432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:07.907 [2024-05-15 15:53:20.931284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.931403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.931429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.931444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.931458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.931487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 
00:35:07.907 [2024-05-15 15:53:20.941278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.941392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.941418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.941433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.941446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.941475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:07.907 [2024-05-15 15:53:20.951331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.951499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.951530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.951548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.951563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.951592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:07.907 [2024-05-15 15:53:20.961342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.961460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.961485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.961500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.961513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.961542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 
00:35:07.907 [2024-05-15 15:53:20.971359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.971483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.971509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.971524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.971537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.971566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:07.907 [2024-05-15 15:53:20.981366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.981478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.981504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.981519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.981532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.981560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:07.907 [2024-05-15 15:53:20.991420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:20.991534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:20.991559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:20.991574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:20.991588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:20.991616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 
00:35:07.907 [2024-05-15 15:53:21.001470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:07.907 [2024-05-15 15:53:21.001585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:07.907 [2024-05-15 15:53:21.001610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:07.907 [2024-05-15 15:53:21.001625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:07.907 [2024-05-15 15:53:21.001639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:07.907 [2024-05-15 15:53:21.001667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:07.907 qpair failed and we were unable to recover it. 00:35:08.165 [2024-05-15 15:53:21.011463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.165 [2024-05-15 15:53:21.011578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.165 [2024-05-15 15:53:21.011604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.165 [2024-05-15 15:53:21.011619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.165 [2024-05-15 15:53:21.011633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.165 [2024-05-15 15:53:21.011661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.165 qpair failed and we were unable to recover it. 00:35:08.165 [2024-05-15 15:53:21.021597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.165 [2024-05-15 15:53:21.021710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.165 [2024-05-15 15:53:21.021736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.165 [2024-05-15 15:53:21.021750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.165 [2024-05-15 15:53:21.021763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.021792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 
00:35:08.166 [2024-05-15 15:53:21.031520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.031656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.031681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.031696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.031709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.031737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.041647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.041771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.041801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.041817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.041831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.041858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.051581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.051693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.051719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.051734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.051747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.051775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 
00:35:08.166 [2024-05-15 15:53:21.061686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.061802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.061827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.061842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.061855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.061884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.071622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.071740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.071765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.071779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.071793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.071821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.081787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.081904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.081930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.081946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.081960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.081993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 
00:35:08.166 [2024-05-15 15:53:21.091681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.091790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.091815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.091830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.091843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.091871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.101714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.101850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.101875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.101890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.101904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.101932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.111852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.112009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.112034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.112049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.112062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.112090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 
00:35:08.166 [2024-05-15 15:53:21.121799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.121935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.121961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.121976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.121990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.122018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.131816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.131945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.131976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.131992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.132005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.132034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.141886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.142019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.142046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.142062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.142078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.142107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 
00:35:08.166 [2024-05-15 15:53:21.151975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.152097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.166 [2024-05-15 15:53:21.152124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.166 [2024-05-15 15:53:21.152140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.166 [2024-05-15 15:53:21.152153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.166 [2024-05-15 15:53:21.152181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.166 qpair failed and we were unable to recover it. 00:35:08.166 [2024-05-15 15:53:21.161920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.166 [2024-05-15 15:53:21.162039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.162067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.162082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.162096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.162125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.171921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.172042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.172068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.172084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.172097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.172130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 
00:35:08.167 [2024-05-15 15:53:21.181945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.182057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.182083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.182097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.182111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.182139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.191981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.192101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.192127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.192142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.192155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.192183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.202034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.202154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.202180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.202195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.202208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.202244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 
00:35:08.167 [2024-05-15 15:53:21.212076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.212190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.212222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.212239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.212256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.212285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.222086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.222202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.222241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.222259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.222273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.222301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.232091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.232208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.232240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.232256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.232270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.232299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 
00:35:08.167 [2024-05-15 15:53:21.242202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.242323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.242350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.242365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.242378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.242406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.252157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.252274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.252302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.252318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.252331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.252359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 00:35:08.167 [2024-05-15 15:53:21.262181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.167 [2024-05-15 15:53:21.262307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.167 [2024-05-15 15:53:21.262339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.167 [2024-05-15 15:53:21.262353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.167 [2024-05-15 15:53:21.262372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.167 [2024-05-15 15:53:21.262402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.167 qpair failed and we were unable to recover it. 
00:35:08.426 [2024-05-15 15:53:21.272213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.272381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.272409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.272425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.272439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.272467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 00:35:08.426 [2024-05-15 15:53:21.282244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.282360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.282385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.282400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.282413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.282442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 00:35:08.426 [2024-05-15 15:53:21.292370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.292482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.292511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.292526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.292540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.292569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 
00:35:08.426 [2024-05-15 15:53:21.302289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.302402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.302427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.302442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.302455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.302483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 00:35:08.426 [2024-05-15 15:53:21.312348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.312467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.312498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.312513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.312527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.312555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 00:35:08.426 [2024-05-15 15:53:21.322433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.322575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.322600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.322616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.322629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.322657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 
00:35:08.426 [2024-05-15 15:53:21.332385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.332494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.332519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.332534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.332548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.426 [2024-05-15 15:53:21.332576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.426 qpair failed and we were unable to recover it. 00:35:08.426 [2024-05-15 15:53:21.342487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.426 [2024-05-15 15:53:21.342598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.426 [2024-05-15 15:53:21.342624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.426 [2024-05-15 15:53:21.342640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.426 [2024-05-15 15:53:21.342652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.342680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.352456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.352584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.352609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.352624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.352643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.352671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 
00:35:08.427 [2024-05-15 15:53:21.362474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.362591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.362617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.362631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.362644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.362673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.372472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.372634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.372659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.372674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.372687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.372718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.382526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.382642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.382668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.382684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.382697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.382725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 
00:35:08.427 [2024-05-15 15:53:21.392686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.392819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.392844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.392859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.392873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.392901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.402582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.402697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.402723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.402737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.402751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.402779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.412602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.412717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.412744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.412760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.412776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.412805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 
00:35:08.427 [2024-05-15 15:53:21.422628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.422738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.422763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.422778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.422791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.422820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.432762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.432895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.432921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.432936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.432949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.432978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.442788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.442932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.442958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.442973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.442991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.443020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 
00:35:08.427 [2024-05-15 15:53:21.452755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.452881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.452906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.452921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.452934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.427 [2024-05-15 15:53:21.452963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.427 qpair failed and we were unable to recover it. 00:35:08.427 [2024-05-15 15:53:21.462766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.427 [2024-05-15 15:53:21.462894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.427 [2024-05-15 15:53:21.462919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.427 [2024-05-15 15:53:21.462934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.427 [2024-05-15 15:53:21.462948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.462976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 00:35:08.428 [2024-05-15 15:53:21.472871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.428 [2024-05-15 15:53:21.473003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.428 [2024-05-15 15:53:21.473028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.428 [2024-05-15 15:53:21.473043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.428 [2024-05-15 15:53:21.473056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.473084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 
00:35:08.428 [2024-05-15 15:53:21.482885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.428 [2024-05-15 15:53:21.483004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.428 [2024-05-15 15:53:21.483030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.428 [2024-05-15 15:53:21.483045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.428 [2024-05-15 15:53:21.483058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.483086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 00:35:08.428 [2024-05-15 15:53:21.492865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.428 [2024-05-15 15:53:21.492990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.428 [2024-05-15 15:53:21.493015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.428 [2024-05-15 15:53:21.493030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.428 [2024-05-15 15:53:21.493043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.493071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 00:35:08.428 [2024-05-15 15:53:21.502863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.428 [2024-05-15 15:53:21.502987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.428 [2024-05-15 15:53:21.503013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.428 [2024-05-15 15:53:21.503027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.428 [2024-05-15 15:53:21.503041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.503068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 
00:35:08.428 [2024-05-15 15:53:21.512919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.428 [2024-05-15 15:53:21.513034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.428 [2024-05-15 15:53:21.513059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.428 [2024-05-15 15:53:21.513074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.428 [2024-05-15 15:53:21.513087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.513116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 00:35:08.428 [2024-05-15 15:53:21.522933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.428 [2024-05-15 15:53:21.523063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.428 [2024-05-15 15:53:21.523089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.428 [2024-05-15 15:53:21.523104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.428 [2024-05-15 15:53:21.523117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.428 [2024-05-15 15:53:21.523146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.428 qpair failed and we were unable to recover it. 00:35:08.687 [2024-05-15 15:53:21.533047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.687 [2024-05-15 15:53:21.533163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.687 [2024-05-15 15:53:21.533189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.687 [2024-05-15 15:53:21.533204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.687 [2024-05-15 15:53:21.533231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.687 [2024-05-15 15:53:21.533261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.687 qpair failed and we were unable to recover it. 
00:35:08.687 [2024-05-15 15:53:21.542967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.687 [2024-05-15 15:53:21.543078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.687 [2024-05-15 15:53:21.543104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.687 [2024-05-15 15:53:21.543119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.687 [2024-05-15 15:53:21.543133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.687 [2024-05-15 15:53:21.543161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.687 qpair failed and we were unable to recover it. 00:35:08.687 [2024-05-15 15:53:21.553106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.687 [2024-05-15 15:53:21.553242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.687 [2024-05-15 15:53:21.553267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.687 [2024-05-15 15:53:21.553282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.687 [2024-05-15 15:53:21.553295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.687 [2024-05-15 15:53:21.553323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.687 qpair failed and we were unable to recover it. 00:35:08.687 [2024-05-15 15:53:21.563032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.687 [2024-05-15 15:53:21.563149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.687 [2024-05-15 15:53:21.563174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.687 [2024-05-15 15:53:21.563189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.687 [2024-05-15 15:53:21.563202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.687 [2024-05-15 15:53:21.563238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.687 qpair failed and we were unable to recover it. 
00:35:08.687 [2024-05-15 15:53:21.573066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.687 [2024-05-15 15:53:21.573186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.687 [2024-05-15 15:53:21.573211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.687 [2024-05-15 15:53:21.573235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.573249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.573277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.583080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.583189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.583224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.583242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.583265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.583293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.593137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.593265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.593291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.593311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.593325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.593355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 
00:35:08.688 [2024-05-15 15:53:21.603261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.603397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.603424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.603440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.603453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.603482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.613222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.613341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.613369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.613384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.613397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.613425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.623203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.623380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.623407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.623430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.623445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.623473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 
00:35:08.688 [2024-05-15 15:53:21.633260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.633376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.633403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.633418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.633431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.633461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.643303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.643453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.643480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.643498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.643512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.643541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.653366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.653495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.653523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.653538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.653551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.653580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 
00:35:08.688 [2024-05-15 15:53:21.663306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.663416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.663444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.663459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.663472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.663500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.673364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.673500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.673528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.673546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.673559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.673588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.683399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.683518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.683545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.683560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.683573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.683602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 
00:35:08.688 [2024-05-15 15:53:21.693404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.693518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.693544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.693559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.693572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.693601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.703434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.703545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.703570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.688 [2024-05-15 15:53:21.703585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.688 [2024-05-15 15:53:21.703598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.688 [2024-05-15 15:53:21.703626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.688 qpair failed and we were unable to recover it. 00:35:08.688 [2024-05-15 15:53:21.713459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.688 [2024-05-15 15:53:21.713575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.688 [2024-05-15 15:53:21.713602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.713622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.713636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.713665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 
00:35:08.689 [2024-05-15 15:53:21.723507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.723656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.723683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.723698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.723711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.723740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 00:35:08.689 [2024-05-15 15:53:21.733504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.733613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.733641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.733656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.733669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.733697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 00:35:08.689 [2024-05-15 15:53:21.743628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.743771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.743796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.743811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.743824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.743852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 
00:35:08.689 [2024-05-15 15:53:21.753570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.753700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.753725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.753741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.753754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.753781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 00:35:08.689 [2024-05-15 15:53:21.763601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.763719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.763745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.763760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.763776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.763804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 00:35:08.689 [2024-05-15 15:53:21.773643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.773800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.773826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.773841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.773854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.773883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 
00:35:08.689 [2024-05-15 15:53:21.783644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.689 [2024-05-15 15:53:21.783770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.689 [2024-05-15 15:53:21.783795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.689 [2024-05-15 15:53:21.783811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.689 [2024-05-15 15:53:21.783824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.689 [2024-05-15 15:53:21.783853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.689 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.793679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.793799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.793825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.793840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.793854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.793882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.803761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.803911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.803937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.803957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.803971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.804000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 
00:35:08.947 [2024-05-15 15:53:21.813725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.813852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.813878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.813893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.813907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.813935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.823756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.823865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.823890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.823905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.823918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.823947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.833847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.834006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.834031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.834057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.834070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.834098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 
00:35:08.947 [2024-05-15 15:53:21.843815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.843930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.843955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.843970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.843984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.844011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.853837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.853954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.853979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.853994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.854008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.854036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.863910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.864031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.864056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.864071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.864084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.864112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 
00:35:08.947 [2024-05-15 15:53:21.873910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.874026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.874053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.874068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.874082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.874109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.883922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.884041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.884068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.884083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.884096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.884124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 00:35:08.947 [2024-05-15 15:53:21.893953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.894067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.894099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.947 [2024-05-15 15:53:21.894115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.947 [2024-05-15 15:53:21.894128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.947 [2024-05-15 15:53:21.894157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.947 qpair failed and we were unable to recover it. 
00:35:08.947 [2024-05-15 15:53:21.903989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.947 [2024-05-15 15:53:21.904110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.947 [2024-05-15 15:53:21.904138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.904153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.904165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.904194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:21.914102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.914231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.914258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.914273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.914286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.914315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:21.924048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.924170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.924196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.924212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.924232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.924262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 
00:35:08.948 [2024-05-15 15:53:21.934075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.934190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.934222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.934239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.934253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.934281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:21.944094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.944204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.944240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.944256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.944269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.944297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:21.954137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.954259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.954285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.954300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.954314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.954342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 
00:35:08.948 [2024-05-15 15:53:21.964194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.964371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.964398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.964413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.964425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.964454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:21.974298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.974427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.974454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.974469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.974482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.974510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:21.984205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.984326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.984358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.984374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.984387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.984416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 
00:35:08.948 [2024-05-15 15:53:21.994270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:21.994402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:21.994429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:21.994444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:21.994457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:21.994485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:22.004295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:22.004419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:22.004447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:22.004465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:22.004479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:22.004509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:22.014309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:22.014430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:22.014457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:22.014473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:22.014486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:22.014515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 
00:35:08.948 [2024-05-15 15:53:22.024355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:22.024472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:22.024499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:22.024514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:22.024527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:22.024561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:22.034366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:22.034511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:22.034539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.948 [2024-05-15 15:53:22.034554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.948 [2024-05-15 15:53:22.034567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.948 [2024-05-15 15:53:22.034595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.948 qpair failed and we were unable to recover it. 00:35:08.948 [2024-05-15 15:53:22.044402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:08.948 [2024-05-15 15:53:22.044521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:08.948 [2024-05-15 15:53:22.044548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:08.949 [2024-05-15 15:53:22.044563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:08.949 [2024-05-15 15:53:22.044576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:08.949 [2024-05-15 15:53:22.044609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:08.949 qpair failed and we were unable to recover it. 
00:35:09.207 [2024-05-15 15:53:22.054501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.207 [2024-05-15 15:53:22.054654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.207 [2024-05-15 15:53:22.054682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.207 [2024-05-15 15:53:22.054698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.207 [2024-05-15 15:53:22.054711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.207 [2024-05-15 15:53:22.054740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.207 qpair failed and we were unable to recover it. 00:35:09.207 [2024-05-15 15:53:22.064439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.207 [2024-05-15 15:53:22.064545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.207 [2024-05-15 15:53:22.064572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.207 [2024-05-15 15:53:22.064587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.207 [2024-05-15 15:53:22.064600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.207 [2024-05-15 15:53:22.064629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.207 qpair failed and we were unable to recover it. 00:35:09.207 [2024-05-15 15:53:22.074483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.207 [2024-05-15 15:53:22.074603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.207 [2024-05-15 15:53:22.074634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.207 [2024-05-15 15:53:22.074650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.207 [2024-05-15 15:53:22.074663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.207 [2024-05-15 15:53:22.074691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.207 qpair failed and we were unable to recover it. 
00:35:09.207 [2024-05-15 15:53:22.084517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.207 [2024-05-15 15:53:22.084636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.207 [2024-05-15 15:53:22.084661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.207 [2024-05-15 15:53:22.084677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.207 [2024-05-15 15:53:22.084690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.207 [2024-05-15 15:53:22.084719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.207 qpair failed and we were unable to recover it. 00:35:09.207 [2024-05-15 15:53:22.094563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.207 [2024-05-15 15:53:22.094695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.207 [2024-05-15 15:53:22.094723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.207 [2024-05-15 15:53:22.094738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.207 [2024-05-15 15:53:22.094751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.207 [2024-05-15 15:53:22.094780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.207 qpair failed and we were unable to recover it. 00:35:09.207 [2024-05-15 15:53:22.104558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.207 [2024-05-15 15:53:22.104672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.207 [2024-05-15 15:53:22.104699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.104714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.104727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.104755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 
00:35:09.208 [2024-05-15 15:53:22.114614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.114785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.114812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.114828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.114840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.114875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.124635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.124753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.124779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.124795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.124808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.124836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.134683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.134800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.134827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.134843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.134856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.134884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 
00:35:09.208 [2024-05-15 15:53:22.144672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.144783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.144810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.144826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.144839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.144867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.154742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.154860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.154887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.154902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.154915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.154943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.164836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.164974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.165005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.165021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.165034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.165063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 
00:35:09.208 [2024-05-15 15:53:22.174778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.174891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.174918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.174933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.174946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.174975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.184792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.184903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.184929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.184945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.184958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.184986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.194840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.194959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.194986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.195001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.195013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.195041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 
00:35:09.208 [2024-05-15 15:53:22.204875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.205030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.205057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.205072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.205085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.205118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.214873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.214986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.215013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.215028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.215040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.215069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 00:35:09.208 [2024-05-15 15:53:22.224895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.225046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.225073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.225088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.225101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.208 [2024-05-15 15:53:22.225129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.208 qpair failed and we were unable to recover it. 
00:35:09.208 [2024-05-15 15:53:22.234972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.208 [2024-05-15 15:53:22.235148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.208 [2024-05-15 15:53:22.235174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.208 [2024-05-15 15:53:22.235189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.208 [2024-05-15 15:53:22.235202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.235236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 00:35:09.209 [2024-05-15 15:53:22.244962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.245077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.245104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.245119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.245131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.245160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 00:35:09.209 [2024-05-15 15:53:22.255027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.255141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.255172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.255188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.255201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.255236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 
00:35:09.209 [2024-05-15 15:53:22.265010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.265122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.265149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.265164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.265177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.265204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 00:35:09.209 [2024-05-15 15:53:22.275060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.275184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.275212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.275235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.275249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.275277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 00:35:09.209 [2024-05-15 15:53:22.285094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.285221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.285249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.285264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.285277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.285305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 
00:35:09.209 [2024-05-15 15:53:22.295120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.295239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.295267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.295285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.295303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.295332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 00:35:09.209 [2024-05-15 15:53:22.305129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.209 [2024-05-15 15:53:22.305246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.209 [2024-05-15 15:53:22.305272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.209 [2024-05-15 15:53:22.305287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.209 [2024-05-15 15:53:22.305300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.209 [2024-05-15 15:53:22.305329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.209 qpair failed and we were unable to recover it. 00:35:09.468 [2024-05-15 15:53:22.315178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.468 [2024-05-15 15:53:22.315304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.468 [2024-05-15 15:53:22.315332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.468 [2024-05-15 15:53:22.315347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.468 [2024-05-15 15:53:22.315360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.468 [2024-05-15 15:53:22.315389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.468 qpair failed and we were unable to recover it. 
00:35:09.468 [2024-05-15 15:53:22.325185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.468 [2024-05-15 15:53:22.325307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.468 [2024-05-15 15:53:22.325334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.468 [2024-05-15 15:53:22.325350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.468 [2024-05-15 15:53:22.325363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.468 [2024-05-15 15:53:22.325392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.468 qpair failed and we were unable to recover it. 00:35:09.468 [2024-05-15 15:53:22.335272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.468 [2024-05-15 15:53:22.335429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.468 [2024-05-15 15:53:22.335461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.468 [2024-05-15 15:53:22.335477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.468 [2024-05-15 15:53:22.335490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.468 [2024-05-15 15:53:22.335519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.468 qpair failed and we were unable to recover it. 00:35:09.468 [2024-05-15 15:53:22.345249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.468 [2024-05-15 15:53:22.345372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.468 [2024-05-15 15:53:22.345399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.468 [2024-05-15 15:53:22.345415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.468 [2024-05-15 15:53:22.345427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.468 [2024-05-15 15:53:22.345456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.468 qpair failed and we were unable to recover it. 
00:35:09.468 [2024-05-15 15:53:22.355276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.468 [2024-05-15 15:53:22.355403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.468 [2024-05-15 15:53:22.355429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.468 [2024-05-15 15:53:22.355444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.468 [2024-05-15 15:53:22.355457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.468 [2024-05-15 15:53:22.355486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.468 qpair failed and we were unable to recover it. 00:35:09.468 [2024-05-15 15:53:22.365303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.468 [2024-05-15 15:53:22.365426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.468 [2024-05-15 15:53:22.365452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.468 [2024-05-15 15:53:22.365468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.468 [2024-05-15 15:53:22.365481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.468 [2024-05-15 15:53:22.365509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.468 qpair failed and we were unable to recover it. 00:35:09.468 [2024-05-15 15:53:22.375330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.375463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.375501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.375516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.375530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.375558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 
00:35:09.469 [2024-05-15 15:53:22.385366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.385527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.385554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.385569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.385590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.385619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.395436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.395614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.395641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.395657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.395669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.395697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.405430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.405597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.405623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.405638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.405651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.405680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 
00:35:09.469 [2024-05-15 15:53:22.415467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.415594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.415620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.415642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.415655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.415683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.425505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.425645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.425671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.425686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.425699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.425727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.435582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.435713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.435739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.435754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.435767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.435796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 
00:35:09.469 [2024-05-15 15:53:22.445547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.445675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.445702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.445717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.445730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.445758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.455580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.455700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.455727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.455742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.455754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.455782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.465611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.465734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.465760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.465776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.465788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.465816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 
00:35:09.469 [2024-05-15 15:53:22.475617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.475757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.475783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.475798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.475816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.475845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.485638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.485756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.485783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.485798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.469 [2024-05-15 15:53:22.485811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.469 [2024-05-15 15:53:22.485839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.469 qpair failed and we were unable to recover it. 00:35:09.469 [2024-05-15 15:53:22.495673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.469 [2024-05-15 15:53:22.495818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.469 [2024-05-15 15:53:22.495844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.469 [2024-05-15 15:53:22.495859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.495872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.495900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 
00:35:09.470 [2024-05-15 15:53:22.505686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.505809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.505836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.505851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.505864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.505892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 00:35:09.470 [2024-05-15 15:53:22.515740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.515913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.515939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.515954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.515967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.515995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 00:35:09.470 [2024-05-15 15:53:22.525755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.525908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.525935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.525950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.525963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.525991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 
00:35:09.470 [2024-05-15 15:53:22.535933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.536062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.536090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.536105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.536118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.536146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 00:35:09.470 [2024-05-15 15:53:22.545817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.545936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.545963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.545979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.545991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.546019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 00:35:09.470 [2024-05-15 15:53:22.555876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.555998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.556025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.556040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.556053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.556081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 
00:35:09.470 [2024-05-15 15:53:22.565879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.470 [2024-05-15 15:53:22.566010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.470 [2024-05-15 15:53:22.566037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.470 [2024-05-15 15:53:22.566057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.470 [2024-05-15 15:53:22.566070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.470 [2024-05-15 15:53:22.566099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.470 qpair failed and we were unable to recover it. 00:35:09.729 [2024-05-15 15:53:22.576007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.729 [2024-05-15 15:53:22.576124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.729 [2024-05-15 15:53:22.576152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.729 [2024-05-15 15:53:22.576167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.729 [2024-05-15 15:53:22.576180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.729 [2024-05-15 15:53:22.576208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.729 qpair failed and we were unable to recover it. 00:35:09.729 [2024-05-15 15:53:22.585959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.729 [2024-05-15 15:53:22.586085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.729 [2024-05-15 15:53:22.586110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.729 [2024-05-15 15:53:22.586125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.729 [2024-05-15 15:53:22.586138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.729 [2024-05-15 15:53:22.586172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.729 qpair failed and we were unable to recover it. 
00:35:09.729 [2024-05-15 15:53:22.595965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.729 [2024-05-15 15:53:22.596093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.729 [2024-05-15 15:53:22.596120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.729 [2024-05-15 15:53:22.596135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.729 [2024-05-15 15:53:22.596148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.596178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.606014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.606146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.606173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.606188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.606206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.606242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.616054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.616175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.616202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.616225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.616240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.616269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 
00:35:09.730 [2024-05-15 15:53:22.626041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.626159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.626186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.626201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.626213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.626250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.636091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.636231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.636258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.636273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.636286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.636314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.646137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.646278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.646305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.646320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.646333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.646361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 
00:35:09.730 [2024-05-15 15:53:22.656165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.656285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.656311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.656332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.656346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.656374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.666257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.666371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.666397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.666412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.666424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.666453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.676213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.676377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.676403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.676418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.676430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.676459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 
00:35:09.730 [2024-05-15 15:53:22.686237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.686365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.686392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.686407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.686423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.686452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.696262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.696380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.696407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.696423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.696435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.696463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.730 qpair failed and we were unable to recover it. 00:35:09.730 [2024-05-15 15:53:22.706303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.730 [2024-05-15 15:53:22.706435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.730 [2024-05-15 15:53:22.706462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.730 [2024-05-15 15:53:22.706481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.730 [2024-05-15 15:53:22.706494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.730 [2024-05-15 15:53:22.706523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 
00:35:09.731 [2024-05-15 15:53:22.716411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.716537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.716563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.716578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.716592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.716620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.726339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.726459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.726485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.726501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.726513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.726542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.736383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.736507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.736534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.736549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.736562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.736590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 
00:35:09.731 [2024-05-15 15:53:22.746415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.746550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.746577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.746597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.746611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.746640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.756463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.756585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.756612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.756626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.756639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.756668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.766490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.766637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.766664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.766679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.766692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.766721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 
00:35:09.731 [2024-05-15 15:53:22.776479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.776606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.776633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.776649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.776661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.776690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.786606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.786729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.786754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.786769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.786781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.786809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.796536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.796659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.796684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.796699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.796711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.796739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 
00:35:09.731 [2024-05-15 15:53:22.806566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.806694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.806721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.806743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.806756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.806784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.816589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.816708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.816734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.816749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.816762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.816790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 00:35:09.731 [2024-05-15 15:53:22.826619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.731 [2024-05-15 15:53:22.826790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.731 [2024-05-15 15:53:22.826816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.731 [2024-05-15 15:53:22.826831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.731 [2024-05-15 15:53:22.826844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.731 [2024-05-15 15:53:22.826872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.731 qpair failed and we were unable to recover it. 
00:35:09.991 [2024-05-15 15:53:22.836645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.836786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.836827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.836843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.836856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.836884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 00:35:09.991 [2024-05-15 15:53:22.846692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.846828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.846856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.846871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.846884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.846920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 00:35:09.991 [2024-05-15 15:53:22.856733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.856870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.856904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.856922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.856935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.856964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 
00:35:09.991 [2024-05-15 15:53:22.866766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.866885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.866912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.866927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.866940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.866968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 00:35:09.991 [2024-05-15 15:53:22.876791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.876910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.876936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.876951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.876964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.876992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 00:35:09.991 [2024-05-15 15:53:22.886819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.886988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.887015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.887030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.887042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.887071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 
00:35:09.991 [2024-05-15 15:53:22.896801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.896919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.896945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.896960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.991 [2024-05-15 15:53:22.896973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.991 [2024-05-15 15:53:22.897002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.991 qpair failed and we were unable to recover it. 00:35:09.991 [2024-05-15 15:53:22.906852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.991 [2024-05-15 15:53:22.906979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.991 [2024-05-15 15:53:22.907005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.991 [2024-05-15 15:53:22.907020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.907033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.907061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:22.916891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.917021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.917047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.917063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.917076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.917104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 
00:35:09.992 [2024-05-15 15:53:22.926924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.927048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.927079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.927095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.927108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.927136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:22.936961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.937083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.937109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.937124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.937137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.937165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:22.946947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.947066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.947092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.947108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.947120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.947149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 
00:35:09.992 [2024-05-15 15:53:22.957026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.957153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.957179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.957197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.957210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.957248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:22.967114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.967248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.967275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.967290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.967303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.967337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:22.977053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.977190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.977225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.977247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.977260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.977290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 
00:35:09.992 [2024-05-15 15:53:22.987060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.987177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.987204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.987227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.987241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.987270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:22.997198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:22.997339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:22.997366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:22.997381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:22.997394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:22.997422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:23.007142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:23.007276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:23.007303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:23.007318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:23.007331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb23e50 00:35:09.992 [2024-05-15 15:53:23.007359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:09.992 qpair failed and we were unable to recover it. 
00:35:09.992 [2024-05-15 15:53:23.017245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:23.017371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:23.017410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:23.017428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:23.017441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1448000b90 00:35:09.992 [2024-05-15 15:53:23.017472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:23.027296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:23.027407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:23.027436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.992 [2024-05-15 15:53:23.027452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.992 [2024-05-15 15:53:23.027464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1448000b90 00:35:09.992 [2024-05-15 15:53:23.027495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:09.992 qpair failed and we were unable to recover it. 00:35:09.992 [2024-05-15 15:53:23.037275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.992 [2024-05-15 15:53:23.037397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.992 [2024-05-15 15:53:23.037427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.993 [2024-05-15 15:53:23.037443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.993 [2024-05-15 15:53:23.037456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1440000b90 00:35:09.993 [2024-05-15 15:53:23.037487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:09.993 qpair failed and we were unable to recover it. 
00:35:09.993 [2024-05-15 15:53:23.047275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.993 [2024-05-15 15:53:23.047396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.993 [2024-05-15 15:53:23.047423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.993 [2024-05-15 15:53:23.047438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.993 [2024-05-15 15:53:23.047450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1440000b90 00:35:09.993 [2024-05-15 15:53:23.047480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:09.993 qpair failed and we were unable to recover it. 00:35:09.993 [2024-05-15 15:53:23.047613] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:09.993 A controller has encountered a failure and is being reset. 00:35:09.993 [2024-05-15 15:53:23.057309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.993 [2024-05-15 15:53:23.057473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.993 [2024-05-15 15:53:23.057504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.993 [2024-05-15 15:53:23.057526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.993 [2024-05-15 15:53:23.057540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1438000b90 00:35:09.993 [2024-05-15 15:53:23.057571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:09.993 qpair failed and we were unable to recover it. 00:35:09.993 [2024-05-15 15:53:23.067341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:09.993 [2024-05-15 15:53:23.067500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:09.993 [2024-05-15 15:53:23.067527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:09.993 [2024-05-15 15:53:23.067543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:09.993 [2024-05-15 15:53:23.067555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1438000b90 00:35:09.993 [2024-05-15 15:53:23.067585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:09.993 qpair failed and we were unable to recover it. 00:35:09.993 [2024-05-15 15:53:23.067695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb31970 (9): Bad file descriptor 00:35:10.251 Controller properly reset. 
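[annotation, not part of the captured run] The block above is the tc2 disconnect case: while the target side is torn down, every I/O qpair CONNECT is rejected ("Unknown controller ID 0x1", sct 1 / sc 130), the keep-alive eventually fails, and the host driver resets the controller before re-attaching below. A minimal way to poke at the same path by hand, assuming nvme-cli is installed and the target from this run is still listening on 10.0.0.2:4420 (address and NQN taken from the log):

  # hedged sketch -- manual fabrics connect/disconnect against the subsystem used above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                                   # confirm the subsystem attached
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # tear the admin/I-O qpairs down again

If the connect is attempted while the target is mid-reset, nvme-cli reports a connect failure much like the qpair errors logged above.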
00:35:10.251 Initializing NVMe Controllers 00:35:10.251 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:10.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:10.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:10.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:10.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:10.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:10.251 Initialization complete. Launching workers. 00:35:10.251 Starting thread on core 1 00:35:10.251 Starting thread on core 2 00:35:10.251 Starting thread on core 3 00:35:10.251 Starting thread on core 0 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:10.251 00:35:10.251 real 0m10.861s 00:35:10.251 user 0m17.657s 00:35:10.251 sys 0m5.262s 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:10.251 ************************************ 00:35:10.251 END TEST nvmf_target_disconnect_tc2 00:35:10.251 ************************************ 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:10.251 rmmod nvme_tcp 00:35:10.251 rmmod nvme_fabrics 00:35:10.251 rmmod nvme_keyring 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1480241 ']' 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1480241 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1480241 ']' 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1480241 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1480241 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1480241' 00:35:10.251 killing process with pid 1480241 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1480241 00:35:10.251 [2024-05-15 15:53:23.248767] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:10.251 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1480241 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.510 15:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.040 15:53:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:13.040 00:35:13.040 real 0m16.095s 00:35:13.040 user 0m44.398s 00:35:13.040 sys 0m7.479s 00:35:13.040 15:53:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:13.040 15:53:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:13.040 ************************************ 00:35:13.040 END TEST nvmf_target_disconnect 00:35:13.040 ************************************ 00:35:13.040 15:53:25 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:35:13.040 15:53:25 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.040 15:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.040 15:53:25 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:13.040 00:35:13.040 real 26m58.373s 00:35:13.040 user 72m42.113s 00:35:13.040 sys 6m28.130s 00:35:13.040 15:53:25 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:13.040 15:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.040 ************************************ 00:35:13.040 END TEST nvmf_tcp 00:35:13.040 ************************************ 00:35:13.040 15:53:25 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:35:13.040 15:53:25 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:13.040 15:53:25 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:13.040 15:53:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:13.040 15:53:25 -- common/autotest_common.sh@10 -- # set +x 00:35:13.040 ************************************ 00:35:13.040 START TEST spdkcli_nvmf_tcp 00:35:13.040 ************************************ 00:35:13.040 15:53:25 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:13.040 * Looking for test storage... 00:35:13.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1481434 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1481434 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1481434 ']' 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:13.041 15:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.041 [2024-05-15 15:53:25.767420] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:35:13.041 [2024-05-15 15:53:25.767526] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481434 ] 00:35:13.041 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.041 [2024-05-15 15:53:25.803036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:13.041 [2024-05-15 15:53:25.836056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:13.041 [2024-05-15 15:53:25.923386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.041 [2024-05-15 15:53:25.923392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.041 15:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:13.041 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:13.041 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:13.041 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:13.041 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:13.041 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:13.041 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:13.041 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:13.041 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:13.041 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:13.041 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:13.041 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:13.041 ' 00:35:15.567 [2024-05-15 15:53:28.610794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.968 [2024-05-15 15:53:29.850671] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:16.968 [2024-05-15 15:53:29.851373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:19.548 [2024-05-15 15:53:32.142468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:21.446 [2024-05-15 15:53:34.092436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:22.819 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:22.819 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:22.819 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:22.819 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:22.819 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:22.819 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:22.819 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:22.819 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:22.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:22.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:22.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:22.819 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # 
check_match 00:35:22.819 15:53:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:23.077 15:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.335 15:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:23.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:23.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:23.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:23.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:23.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:23.335 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:23.335 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:23.335 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:23.335 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:23.335 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:23.335 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:23.335 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:23.335 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:23.335 ' 00:35:28.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:28.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:28.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:28.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:28.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:28.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:28.599 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:28.599 Executing command: 
['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:28.599 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:28.599 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:28.599 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:28.599 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:28.599 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:28.599 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1481434 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1481434 ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1481434 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1481434 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1481434' 00:35:28.599 killing process with pid 1481434 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1481434 00:35:28.599 [2024-05-15 15:53:41.463371] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1481434 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1481434 ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1481434 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1481434 ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1481434 00:35:28.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1481434) - No such process 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1481434 is not found' 00:35:28.599 Process with pid 1481434 is not found 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:28.599 00:35:28.599 real 0m16.017s 00:35:28.599 user 0m33.868s 00:35:28.599 sys 0m0.819s 
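[annotation, not part of the captured run] The spdkcli_nvmf_tcp test drives nvmf_tgt entirely through spdkcli_job.py / spdkcli.py. The same target layout the "Executing command" lines build can also be assembled with plain JSON-RPC calls; a rough, hedged equivalent of the first few create commands, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket and the method names shipped in SPDK's scripts/rpc.py:

  # hedged sketch -- rpc.py counterpart of the spdkcli create commands above
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260

The delete_all / delete commands executed above are the spdkcli-side cleanup of exactly this configuration.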
00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:28.599 15:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 ************************************ 00:35:28.599 END TEST spdkcli_nvmf_tcp 00:35:28.599 ************************************ 00:35:28.599 15:53:41 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:28.599 15:53:41 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:28.599 15:53:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:28.599 15:53:41 -- common/autotest_common.sh@10 -- # set +x 00:35:28.858 ************************************ 00:35:28.858 START TEST nvmf_identify_passthru 00:35:28.858 ************************************ 00:35:28.858 15:53:41 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:28.858 * Looking for test storage... 00:35:28.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.858 15:53:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.858 15:53:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.858 15:53:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.858 15:53:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:28.858 15:53:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.858 15:53:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.858 15:53:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.858 15:53:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:28.858 15:53:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.858 15:53:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.858 15:53:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:28.858 15:53:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:28.858 15:53:41 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:28.858 15:53:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.388 15:53:44 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.388 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:31.389 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:31.389 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:31.389 Found net devices under 0000:09:00.0: cvl_0_0 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:31.389 Found net devices under 0000:09:00.1: cvl_0_1 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
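For reference, the gather_supported_nvmf_pci_devs pass traced above resolves each supported NIC from its PCI address to the kernel net device by globbing sysfs. A minimal standalone sketch of that lookup, using the two E810 ports reported in this run (the loop is a simplification of what nvmf/common.sh actually does):

  # sketch: map a PCI function to its net device, as the trace above reports
  for pci in 0000:09:00.0 0000:09:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] || continue              # glob did not match: no bound net driver
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done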
00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:31.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:35:31.389 00:35:31.389 --- 10.0.0.2 ping statistics --- 00:35:31.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.389 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:35:31.389 00:35:31.389 --- 10.0.0.1 ping statistics --- 00:35:31.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.389 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:31.389 15:53:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:35:31.389 15:53:44 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:0b:00.0 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:31.389 15:53:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:31.389 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.575 
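The nvmf_tcp_init block above wires the two ports into a back-to-back TCP topology: the first port (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a single ping. Condensed from the trace (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator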
15:53:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:35:35.575 15:53:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:35:35.575 15:53:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:35.575 15:53:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:35.575 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1486338 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:39.759 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1486338 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1486338 ']' 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.759 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:39.760 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.760 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:39.760 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.760 [2024-05-15 15:53:52.719196] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:35:39.760 [2024-05-15 15:53:52.719303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.760 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.760 [2024-05-15 15:53:52.765085] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:39.760 [2024-05-15 15:53:52.800083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:40.018 [2024-05-15 15:53:52.889125] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
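get_first_nvme_bdf above selects the PCIe address of the first local NVMe drive by asking gen_nvme.sh for an attach configuration and pulling traddr out with jq; the serial and model strings are then read directly over PCIe so they can be compared against the fabric-attached view later in the test. A hedged sketch, assuming it runs from the SPDK repository root (head -n1 stands in for the bash array indexing the helper uses):

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # 0000:0b:00.0 in this run
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "$bdf $serial $model"    # BTLJ72430F4Q1P0FGN / INTEL per the trace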
00:35:40.018 [2024-05-15 15:53:52.889175] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.018 [2024-05-15 15:53:52.889192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.018 [2024-05-15 15:53:52.889205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.018 [2024-05-15 15:53:52.889225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:40.018 [2024-05-15 15:53:52.889292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.018 [2024-05-15 15:53:52.889326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.018 [2024-05-15 15:53:52.889349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:40.018 [2024-05-15 15:53:52.889352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:35:40.018 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.018 INFO: Log level set to 20 00:35:40.018 INFO: Requests: 00:35:40.018 { 00:35:40.018 "jsonrpc": "2.0", 00:35:40.018 "method": "nvmf_set_config", 00:35:40.018 "id": 1, 00:35:40.018 "params": { 00:35:40.018 "admin_cmd_passthru": { 00:35:40.018 "identify_ctrlr": true 00:35:40.018 } 00:35:40.018 } 00:35:40.018 } 00:35:40.018 00:35:40.018 INFO: response: 00:35:40.018 { 00:35:40.018 "jsonrpc": "2.0", 00:35:40.018 "id": 1, 00:35:40.018 "result": true 00:35:40.018 } 00:35:40.018 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.018 15:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.018 15:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.018 INFO: Setting log level to 20 00:35:40.018 INFO: Setting log level to 20 00:35:40.018 INFO: Log level set to 20 00:35:40.018 INFO: Log level set to 20 00:35:40.018 INFO: Requests: 00:35:40.018 { 00:35:40.018 "jsonrpc": "2.0", 00:35:40.018 "method": "framework_start_init", 00:35:40.018 "id": 1 00:35:40.018 } 00:35:40.018 00:35:40.018 INFO: Requests: 00:35:40.018 { 00:35:40.018 "jsonrpc": "2.0", 00:35:40.018 "method": "framework_start_init", 00:35:40.018 "id": 1 00:35:40.018 } 00:35:40.018 00:35:40.018 [2024-05-15 15:53:53.053413] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:40.018 INFO: response: 00:35:40.018 { 00:35:40.018 "jsonrpc": "2.0", 00:35:40.018 "id": 1, 00:35:40.018 "result": true 00:35:40.018 } 00:35:40.018 00:35:40.018 INFO: response: 00:35:40.018 { 00:35:40.018 "jsonrpc": "2.0", 00:35:40.018 "id": 1, 00:35:40.018 "result": true 00:35:40.018 } 00:35:40.018 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.018 15:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
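Because nvmf_tgt was launched with --wait-for-rpc, the framework is still parked when the JSON-RPC exchange above happens: nvmf_set_config turns on admin_cmd_passthru.identify_ctrlr, and only then does framework_start_init let initialization continue, which is why the "Custom identify ctrlr handler enabled" notice lands between the two responses. The same sequence issued through scripts/rpc.py would look roughly like this (rpc_cmd in the trace is a thin wrapper around rpc.py):

  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # admin_cmd_passthru.identify_ctrlr = true
  scripts/rpc.py framework_start_init                        # release the --wait-for-rpc hold
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data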
00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.018 INFO: Setting log level to 40 00:35:40.018 INFO: Setting log level to 40 00:35:40.018 INFO: Setting log level to 40 00:35:40.018 [2024-05-15 15:53:53.063249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.018 15:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.018 15:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.018 15:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:43.297 Nvme0n1 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.297 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.297 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.297 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:43.297 [2024-05-15 15:53:55.954720] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:43.297 [2024-05-15 15:53:55.955019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.297 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.297 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:43.297 [ 00:35:43.297 { 00:35:43.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:43.297 "subtype": "Discovery", 00:35:43.297 "listen_addresses": [], 00:35:43.297 "allow_any_host": true, 00:35:43.297 "hosts": [] 00:35:43.297 }, 00:35:43.297 { 00:35:43.297 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:35:43.297 "subtype": "NVMe", 00:35:43.297 "listen_addresses": [ 00:35:43.297 { 00:35:43.297 "trtype": "TCP", 00:35:43.297 "adrfam": "IPv4", 00:35:43.297 "traddr": "10.0.0.2", 00:35:43.297 "trsvcid": "4420" 00:35:43.297 } 00:35:43.297 ], 00:35:43.297 "allow_any_host": true, 00:35:43.297 "hosts": [], 00:35:43.297 "serial_number": "SPDK00000000000001", 00:35:43.297 "model_number": "SPDK bdev Controller", 00:35:43.297 "max_namespaces": 1, 00:35:43.297 "min_cntlid": 1, 00:35:43.297 "max_cntlid": 65519, 00:35:43.297 "namespaces": [ 00:35:43.297 { 00:35:43.297 "nsid": 1, 00:35:43.297 "bdev_name": "Nvme0n1", 00:35:43.297 "name": "Nvme0n1", 00:35:43.297 "nguid": "FDD1497991804C24A0318031EBEA3240", 00:35:43.297 "uuid": "fdd14979-9180-4c24-a031-8031ebea3240" 00:35:43.297 } 00:35:43.297 ] 00:35:43.297 } 00:35:43.298 ] 00:35:43.298 15:53:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.298 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:43.298 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:43.298 15:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:43.298 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.298 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:35:43.298 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:43.298 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:43.298 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:43.298 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.556 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:43.556 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.556 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:43.556 15:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:43.556 15:53:56 
nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:43.556 rmmod nvme_tcp 00:35:43.556 rmmod nvme_fabrics 00:35:43.556 rmmod nvme_keyring 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1486338 ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1486338 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1486338 ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1486338 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1486338 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1486338' 00:35:43.556 killing process with pid 1486338 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1486338 00:35:43.556 [2024-05-15 15:53:56.496373] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:43.556 15:53:56 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1486338 00:35:44.930 15:53:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:44.931 15:53:57 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:44.931 15:53:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:44.931 15:53:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:44.931 15:53:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:44.931 15:53:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.931 15:53:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.931 15:53:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.504 15:54:00 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:47.504 00:35:47.504 real 0m18.321s 00:35:47.504 user 0m26.834s 00:35:47.504 sys 0m2.667s 00:35:47.504 15:54:00 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:47.504 15:54:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.504 ************************************ 00:35:47.504 END TEST nvmf_identify_passthru 00:35:47.504 ************************************ 00:35:47.504 15:54:00 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:47.504 15:54:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:47.504 15:54:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 
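nvmftestfini above unwinds the fixture so the next suite (nvmf_dif, which starts below) can rebuild it: the kernel initiator modules are removed, the target process is killed, and the test namespace and addresses are flushed. Roughly, and assuming the _remove_spdk_ns helper reduces to a plain netns delete:

  modprobe -v -r nvme-tcp nvme-fabrics          # drop the kernel initiator modules
  kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null  # stop the nvmf_tgt started earlier (pid 1486338 here)
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption about what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1                      # matches the nvmf_tcp_fini step in the trace above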
00:35:47.504 15:54:00 -- common/autotest_common.sh@10 -- # set +x 00:35:47.504 ************************************ 00:35:47.504 START TEST nvmf_dif 00:35:47.504 ************************************ 00:35:47.504 15:54:00 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:47.504 * Looking for test storage... 00:35:47.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:47.504 15:54:00 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.504 15:54:00 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.504 15:54:00 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.504 15:54:00 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.504 15:54:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.504 15:54:00 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.504 15:54:00 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.504 15:54:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:47.504 15:54:00 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:47.504 15:54:00 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:47.505 15:54:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:47.505 15:54:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:47.505 15:54:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:47.505 15:54:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:47.505 15:54:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.505 15:54:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:47.505 15:54:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:47.505 15:54:00 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:47.505 15:54:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.405 15:54:02 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.405 15:54:02 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:49.405 15:54:02 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:49.405 15:54:02 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:49.405 15:54:02 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:49.405 15:54:02 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:49.664 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:49.664 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:49.664 15:54:02 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:49.664 Found net devices under 0000:09:00.0: cvl_0_0 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:49.664 Found net devices under 0000:09:00.1: cvl_0_1 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:49.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:49.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:35:49.664 00:35:49.664 --- 10.0.0.2 ping statistics --- 00:35:49.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.664 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:49.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:49.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:35:49.664 00:35:49.664 --- 10.0.0.1 ping statistics --- 00:35:49.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.664 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:49.664 15:54:02 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:51.042 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:51.042 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:51.042 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:51.042 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:51.042 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:51.042 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:51.042 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:51.042 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:51.042 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:51.042 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:51.042 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:51.042 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:51.042 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:51.042 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:51.042 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:51.042 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:51.042 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:51.042 15:54:04 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.042 15:54:04 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:51.042 15:54:04 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:51.043 15:54:04 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.043 15:54:04 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:51.043 15:54:04 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:51.300 15:54:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:51.300 15:54:04 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:35:51.300 15:54:04 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.300 15:54:04 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1490101 00:35:51.300 15:54:04 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:51.300 15:54:04 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1490101 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1490101 ']' 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:51.300 15:54:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.300 [2024-05-15 15:54:04.202663] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:35:51.300 [2024-05-15 15:54:04.202733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.300 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.300 [2024-05-15 15:54:04.245415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:51.300 [2024-05-15 15:54:04.277965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.300 [2024-05-15 15:54:04.359712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.300 [2024-05-15 15:54:04.359782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.300 [2024-05-15 15:54:04.359797] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.300 [2024-05-15 15:54:04.359808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.300 [2024-05-15 15:54:04.359818] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
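The dif suite reuses the same namespace plumbing but appends --dif-insert-or-strip to the transport options before the target comes up, so the target, rather than the host, generates and checks the protection information on the wire; the corresponding nvmf_create_transport call appears in the trace just below. In outline:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &        # single-core target for dif.sh
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip     # DIF handled by the target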
00:35:51.300 [2024-05-15 15:54:04.359858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:51.558 15:54:04 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 15:54:04 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.558 15:54:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:51.558 15:54:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 [2024-05-15 15:54:04.503776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.558 15:54:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 ************************************ 00:35:51.558 START TEST fio_dif_1_default 00:35:51.558 ************************************ 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 bdev_null0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:51.558 [2024-05-15 15:54:04.571905] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:51.558 [2024-05-15 15:54:04.572167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.558 { 00:35:51.558 "params": { 00:35:51.558 "name": "Nvme$subsystem", 00:35:51.558 "trtype": "$TEST_TRANSPORT", 00:35:51.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.558 "adrfam": "ipv4", 00:35:51.558 "trsvcid": "$NVMF_PORT", 00:35:51.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.558 "hdgst": ${hdgst:-false}, 00:35:51.558 "ddgst": ${ddgst:-false} 00:35:51.558 }, 00:35:51.558 "method": "bdev_nvme_attach_controller" 00:35:51.558 } 00:35:51.558 EOF 00:35:51.558 )") 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:51.558 "params": { 00:35:51.558 "name": "Nvme0", 00:35:51.558 "trtype": "tcp", 00:35:51.558 "traddr": "10.0.0.2", 00:35:51.558 "adrfam": "ipv4", 00:35:51.558 "trsvcid": "4420", 00:35:51.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.558 "hdgst": false, 00:35:51.558 "ddgst": false 00:35:51.558 }, 00:35:51.558 "method": "bdev_nvme_attach_controller" 00:35:51.558 }' 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.558 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.559 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:51.559 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:51.559 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:51.559 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:51.559 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:51.559 15:54:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.816 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:51.816 fio-3.35 00:35:51.816 Starting 1 thread 00:35:51.816 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.013 00:36:04.013 filename0: (groupid=0, jobs=1): err= 0: pid=1490329: Wed May 15 15:54:15 2024 00:36:04.013 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10038msec) 00:36:04.013 slat (nsec): min=6934, max=64595, avg=9981.89, stdev=4812.44 00:36:04.013 clat (usec): min=673, max=43775, avg=21102.20, stdev=20223.57 00:36:04.013 lat (usec): min=680, max=43811, avg=21112.18, stdev=20224.50 00:36:04.013 clat percentiles (usec): 00:36:04.013 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 734], 20.00th=[ 750], 00:36:04.013 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[41157], 60.00th=[41157], 00:36:04.013 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:04.013 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[43779], 99.95th=[43779], 00:36:04.013 | 99.99th=[43779] 00:36:04.013 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=758.40, stdev=23.45, samples=20 00:36:04.013 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:36:04.013 lat (usec) : 750=23.84%, 1000=25.84% 00:36:04.013 lat (msec) : 50=50.32% 00:36:04.013 cpu : usr=89.78%, sys=9.93%, ctx=15, majf=0, minf=264 00:36:04.013 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.013 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.013 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:04.013 00:36:04.013 Run status group 0 (all jobs): 00:36:04.013 READ: bw=757KiB/s (775kB/s), 757KiB/s-757KiB/s (775kB/s-775kB/s), io=7600KiB (7782kB), run=10038-10038msec 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 00:36:04.013 real 0m11.185s 00:36:04.013 user 0m10.095s 00:36:04.013 sys 0m1.260s 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 ************************************ 00:36:04.013 END TEST fio_dif_1_default 00:36:04.013 ************************************ 00:36:04.013 15:54:15 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:04.013 15:54:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:04.013 15:54:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 ************************************ 00:36:04.013 START TEST fio_dif_1_multi_subsystems 00:36:04.013 ************************************ 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:04.013 15:54:15 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 bdev_null0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 [2024-05-15 15:54:15.814429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 bdev_null1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:36:04.013 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.013 { 00:36:04.013 "params": { 00:36:04.014 "name": "Nvme$subsystem", 00:36:04.014 "trtype": "$TEST_TRANSPORT", 00:36:04.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.014 "adrfam": "ipv4", 00:36:04.014 "trsvcid": "$NVMF_PORT", 00:36:04.014 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.014 "hdgst": ${hdgst:-false}, 00:36:04.014 "ddgst": ${ddgst:-false} 00:36:04.014 }, 00:36:04.014 "method": "bdev_nvme_attach_controller" 00:36:04.014 } 00:36:04.014 EOF 00:36:04.014 )") 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.014 { 00:36:04.014 "params": { 00:36:04.014 "name": "Nvme$subsystem", 00:36:04.014 "trtype": "$TEST_TRANSPORT", 00:36:04.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.014 "adrfam": "ipv4", 00:36:04.014 "trsvcid": "$NVMF_PORT", 00:36:04.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.014 "hdgst": ${hdgst:-false}, 00:36:04.014 "ddgst": ${ddgst:-false} 00:36:04.014 }, 00:36:04.014 "method": "bdev_nvme_attach_controller" 00:36:04.014 } 00:36:04.014 EOF 00:36:04.014 )") 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:04.014 "params": { 00:36:04.014 "name": "Nvme0", 00:36:04.014 "trtype": "tcp", 00:36:04.014 "traddr": "10.0.0.2", 00:36:04.014 "adrfam": "ipv4", 00:36:04.014 "trsvcid": "4420", 00:36:04.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.014 "hdgst": false, 00:36:04.014 "ddgst": false 00:36:04.014 }, 00:36:04.014 "method": "bdev_nvme_attach_controller" 00:36:04.014 },{ 00:36:04.014 "params": { 00:36:04.014 "name": "Nvme1", 00:36:04.014 "trtype": "tcp", 00:36:04.014 "traddr": "10.0.0.2", 00:36:04.014 "adrfam": "ipv4", 00:36:04.014 "trsvcid": "4420", 00:36:04.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:04.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:04.014 "hdgst": false, 00:36:04.014 "ddgst": false 00:36:04.014 }, 00:36:04.014 "method": "bdev_nvme_attach_controller" 00:36:04.014 }' 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.014 15:54:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.014 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:04.014 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:04.014 fio-3.35 00:36:04.014 Starting 2 threads 00:36:04.014 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.980 00:36:13.980 filename0: (groupid=0, jobs=1): err= 0: pid=1492233: Wed May 15 15:54:26 2024 00:36:13.980 read: IOPS=95, BW=382KiB/s (392kB/s)(3840KiB/10040msec) 00:36:13.980 slat (nsec): min=7237, max=32136, avg=9501.91, stdev=3231.43 00:36:13.980 clat (usec): min=40856, max=46649, avg=41802.30, stdev=534.36 00:36:13.980 lat (usec): min=40868, max=46669, avg=41811.80, stdev=534.61 00:36:13.980 clat percentiles (usec): 00:36:13.980 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:13.980 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:13.980 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:13.980 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:36:13.980 | 99.99th=[46400] 
00:36:13.980 bw ( KiB/s): min= 352, max= 384, per=33.67%, avg=382.40, stdev= 7.16, samples=20 00:36:13.980 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:36:13.980 lat (msec) : 50=100.00% 00:36:13.980 cpu : usr=94.43%, sys=5.28%, ctx=21, majf=0, minf=69 00:36:13.980 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.980 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.980 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:13.980 filename1: (groupid=0, jobs=1): err= 0: pid=1492234: Wed May 15 15:54:26 2024 00:36:13.980 read: IOPS=188, BW=752KiB/s (770kB/s)(7552KiB/10039msec) 00:36:13.980 slat (nsec): min=7177, max=32698, avg=9167.78, stdev=2828.15 00:36:13.980 clat (usec): min=680, max=46409, avg=21239.66, stdev=20155.64 00:36:13.980 lat (usec): min=688, max=46429, avg=21248.83, stdev=20155.54 00:36:13.980 clat percentiles (usec): 00:36:13.980 | 1.00th=[ 717], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 840], 00:36:13.980 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:36:13.980 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:13.980 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:36:13.980 | 99.99th=[46400] 00:36:13.980 bw ( KiB/s): min= 673, max= 768, per=66.36%, avg=753.65, stdev=30.08, samples=20 00:36:13.980 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:36:13.980 lat (usec) : 750=4.71%, 1000=44.65% 00:36:13.980 lat (msec) : 50=50.64% 00:36:13.980 cpu : usr=94.37%, sys=5.33%, ctx=17, majf=0, minf=173 00:36:13.980 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.980 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.980 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:13.980 00:36:13.980 Run status group 0 (all jobs): 00:36:13.980 READ: bw=1135KiB/s (1162kB/s), 382KiB/s-752KiB/s (392kB/s-770kB/s), io=11.1MiB (11.7MB), run=10039-10040msec 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
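The aggregate READ line in the "Run status group 0" summary above follows directly from the two per-file totals: filename0 read 3840 KiB and filename1 read 7552 KiB, each over roughly 10.04 s. A quick sanity check, using only values copied from the log:

echo $((3840 + 7552))                                          # 11392 KiB in total, i.e. ~11.1 MiB
awk 'BEGIN { printf "%.0f KiB/s\n", (3840 + 7552) / 10.04 }'   # ~1135 KiB/s aggregate, matching bw=1135KiB/s

The 382 KiB/s and 752 KiB/s per-file bandwidths quoted on the same line add up to the same aggregate, within rounding.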
00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 00:36:14.238 real 0m11.344s 00:36:14.238 user 0m20.205s 00:36:14.238 sys 0m1.380s 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 ************************************ 00:36:14.238 END TEST fio_dif_1_multi_subsystems 00:36:14.238 ************************************ 00:36:14.238 15:54:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:14.238 15:54:27 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:14.238 15:54:27 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 ************************************ 00:36:14.238 START TEST fio_dif_rand_params 00:36:14.238 ************************************ 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 bdev_null0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.238 [2024-05-15 15:54:27.218322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:14.238 { 00:36:14.238 "params": { 00:36:14.238 "name": "Nvme$subsystem", 00:36:14.238 "trtype": "$TEST_TRANSPORT", 00:36:14.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.238 "adrfam": "ipv4", 00:36:14.238 "trsvcid": "$NVMF_PORT", 00:36:14.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.238 "hdgst": ${hdgst:-false}, 00:36:14.238 "ddgst": ${ddgst:-false} 00:36:14.238 }, 00:36:14.238 "method": "bdev_nvme_attach_controller" 00:36:14.238 } 00:36:14.238 EOF 00:36:14.238 )") 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:14.238 15:54:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
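The gen_nvmf_target_json records around this point build one bdev_nvme_attach_controller fragment per requested subsystem (only subsystem 0 in this run, 0 and 1 in the fio_dif_1_multi_subsystems run above), comma-join the fragments, and validate the result with jq before handing it to fio on /dev/fd/62. Below is a condensed sketch of that pattern for the two-subsystem case, with the address, port and NQNs hard-coded to the values used in this run; the outer "subsystems"/"bdev" wrapper is an assumption about the full config the fragments end up in, since the trace only prints the joined fragments themselves.

config=()
for sub in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
(
    # Comma-join the fragments, as the test's IFS=, / printf step does, and let
    # jq validate and pretty-print the assembled bdev configuration.
    IFS=,
    joined="${config[*]}"
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $joined ] } ] }
EOF
)

fio then consumes the generated JSON through --spdk_json_conf /dev/fd/62, preloading the spdk_bdev engine exactly as the LD_PRELOAD and /usr/src/fio/fio records further down show.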
00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:14.238 "params": { 00:36:14.238 "name": "Nvme0", 00:36:14.238 "trtype": "tcp", 00:36:14.238 "traddr": "10.0.0.2", 00:36:14.238 "adrfam": "ipv4", 00:36:14.238 "trsvcid": "4420", 00:36:14.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.238 "hdgst": false, 00:36:14.238 "ddgst": false 00:36:14.238 }, 00:36:14.238 "method": "bdev_nvme_attach_controller" 00:36:14.238 }' 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:14.238 15:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.496 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:14.496 ... 
00:36:14.496 fio-3.35 00:36:14.496 Starting 3 threads 00:36:14.496 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.051 00:36:21.051 filename0: (groupid=0, jobs=1): err= 0: pid=1493616: Wed May 15 15:54:33 2024 00:36:21.051 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(131MiB/5044msec) 00:36:21.051 slat (nsec): min=7198, max=52592, avg=14370.56, stdev=4780.46 00:36:21.051 clat (usec): min=4312, max=91215, avg=14353.56, stdev=13396.71 00:36:21.051 lat (usec): min=4323, max=91229, avg=14367.93, stdev=13396.58 00:36:21.051 clat percentiles (usec): 00:36:21.051 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 7635], 00:36:21.051 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11207], 00:36:21.051 | 70.00th=[12387], 80.00th=[13435], 90.00th=[46924], 95.00th=[51119], 00:36:21.051 | 99.00th=[54789], 99.50th=[55313], 99.90th=[88605], 99.95th=[90702], 00:36:21.051 | 99.99th=[90702] 00:36:21.051 bw ( KiB/s): min=20736, max=34304, per=34.44%, avg=26803.20, stdev=4778.74, samples=10 00:36:21.051 iops : min= 162, max= 268, avg=209.40, stdev=37.33, samples=10 00:36:21.051 lat (msec) : 10=50.29%, 20=38.95%, 50=4.57%, 100=6.19% 00:36:21.051 cpu : usr=93.79%, sys=5.77%, ctx=17, majf=0, minf=115 00:36:21.051 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.051 issued rwts: total=1050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.051 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:21.051 filename0: (groupid=0, jobs=1): err= 0: pid=1493617: Wed May 15 15:54:33 2024 00:36:21.051 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5003msec) 00:36:21.051 slat (nsec): min=5175, max=54580, avg=18406.53, stdev=6342.20 00:36:21.051 clat (usec): min=4423, max=55488, avg=14338.44, stdev=13057.97 00:36:21.051 lat (usec): min=4438, max=55516, avg=14356.84, stdev=13058.23 00:36:21.051 clat percentiles (usec): 00:36:21.051 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 7570], 00:36:21.051 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[11207], 00:36:21.051 | 70.00th=[12256], 80.00th=[13304], 90.00th=[46924], 95.00th=[50070], 00:36:21.051 | 99.00th=[52691], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:36:21.051 | 99.99th=[55313] 00:36:21.051 bw ( KiB/s): min=19200, max=33024, per=34.27%, avg=26675.20, stdev=4855.46, samples=10 00:36:21.051 iops : min= 150, max= 258, avg=208.40, stdev=37.93, samples=10 00:36:21.051 lat (msec) : 10=51.39%, 20=37.13%, 50=7.18%, 100=4.31% 00:36:21.051 cpu : usr=90.98%, sys=6.66%, ctx=303, majf=0, minf=158 00:36:21.051 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.051 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.051 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:21.051 filename0: (groupid=0, jobs=1): err= 0: pid=1493618: Wed May 15 15:54:33 2024 00:36:21.051 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(122MiB/5011msec) 00:36:21.051 slat (nsec): min=6451, max=65559, avg=13990.14, stdev=4720.64 00:36:21.051 clat (usec): min=4870, max=87911, avg=15440.76, stdev=14246.71 00:36:21.051 lat (usec): min=4882, max=87923, avg=15454.75, stdev=14246.89 00:36:21.051 clat percentiles (usec): 
00:36:21.051 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 7898], 00:36:21.051 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11731], 00:36:21.051 | 70.00th=[12649], 80.00th=[14222], 90.00th=[49021], 95.00th=[51119], 00:36:21.051 | 99.00th=[54264], 99.50th=[56886], 99.90th=[87557], 99.95th=[87557], 00:36:21.051 | 99.99th=[87557] 00:36:21.051 bw ( KiB/s): min=14848, max=34304, per=31.88%, avg=24811.00, stdev=6028.21, samples=10 00:36:21.051 iops : min= 116, max= 268, avg=193.80, stdev=47.11, samples=10 00:36:21.051 lat (msec) : 10=45.78%, 20=41.15%, 50=5.66%, 100=7.41% 00:36:21.051 cpu : usr=94.65%, sys=4.93%, ctx=17, majf=0, minf=161 00:36:21.051 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.052 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:21.052 00:36:21.052 Run status group 0 (all jobs): 00:36:21.052 READ: bw=76.0MiB/s (79.7MB/s), 24.2MiB/s-26.1MiB/s (25.4MB/s-27.4MB/s), io=383MiB (402MB), run=5003-5044msec 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
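destroy_subsystems, traced in the surrounding records, undoes the setup with two RPCs per subsystem: the NVMe-oF subsystem is removed first, then the null bdev that backed its namespace is deleted. Issued by hand against a running target, the equivalent of the calls wrapped by rpc_cmd here would be the following sketch, using SPDK's scripts/rpc.py and its default local RPC socket:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # drop the subsystem (and its listener)
./scripts/rpc.py bdev_null_delete bdev_null0                        # then delete the backing null bdev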
00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 bdev_null0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 [2024-05-15 15:54:33.358204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 bdev_null1 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
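With the parameters switched to NULL_DIF=2, 4k blocks, 8 jobs and queue depth 16 in the records just above, the setup that follows repeats the same per-subsystem sequence three times (subsystems 0, 1 and 2), each pass creating a DIF type 2 null bdev and exposing it over NVMe/TCP. Collapsed into a loop and written against scripts/rpc.py rather than the test's rpc_cmd wrapper, the sequence is roughly the sketch below; it assumes an nvmf_tgt that is already running with its TCP transport created earlier in the run.

for sub in 0 1 2; do
    # Null bdev: size 64, block size 512, 16-byte metadata, DIF type 2.
    ./scripts/rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem with a namespace backed by that bdev and a TCP listener.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
        --serial-number 53313233-$sub --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
        -t tcp -a 10.0.0.2 -s 4420
done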
-- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 bdev_null2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.052 { 00:36:21.052 "params": { 00:36:21.052 "name": "Nvme$subsystem", 00:36:21.052 "trtype": "$TEST_TRANSPORT", 00:36:21.052 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.052 "adrfam": "ipv4", 00:36:21.052 "trsvcid": "$NVMF_PORT", 00:36:21.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.052 "hdgst": ${hdgst:-false}, 00:36:21.052 "ddgst": ${ddgst:-false} 00:36:21.052 }, 00:36:21.052 "method": "bdev_nvme_attach_controller" 00:36:21.052 } 00:36:21.052 EOF 00:36:21.052 )") 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.052 { 00:36:21.052 "params": { 00:36:21.052 "name": "Nvme$subsystem", 00:36:21.052 "trtype": "$TEST_TRANSPORT", 00:36:21.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.052 "adrfam": "ipv4", 00:36:21.052 "trsvcid": "$NVMF_PORT", 00:36:21.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.052 "hdgst": ${hdgst:-false}, 00:36:21.052 "ddgst": ${ddgst:-false} 00:36:21.052 }, 00:36:21.052 "method": "bdev_nvme_attach_controller" 00:36:21.052 } 00:36:21.052 EOF 00:36:21.052 )") 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.052 { 00:36:21.052 "params": { 00:36:21.052 "name": "Nvme$subsystem", 00:36:21.052 "trtype": "$TEST_TRANSPORT", 00:36:21.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.052 "adrfam": "ipv4", 00:36:21.052 "trsvcid": "$NVMF_PORT", 00:36:21.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.052 "hdgst": ${hdgst:-false}, 00:36:21.052 "ddgst": ${ddgst:-false} 00:36:21.052 }, 00:36:21.052 "method": "bdev_nvme_attach_controller" 00:36:21.052 } 00:36:21.052 EOF 00:36:21.052 )") 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:21.052 "params": { 00:36:21.052 "name": "Nvme0", 00:36:21.052 "trtype": "tcp", 00:36:21.052 "traddr": "10.0.0.2", 00:36:21.052 "adrfam": "ipv4", 00:36:21.052 "trsvcid": "4420", 00:36:21.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.052 "hdgst": false, 00:36:21.052 "ddgst": false 00:36:21.052 }, 00:36:21.052 "method": "bdev_nvme_attach_controller" 00:36:21.052 },{ 00:36:21.052 "params": { 00:36:21.052 "name": "Nvme1", 00:36:21.052 "trtype": "tcp", 00:36:21.052 "traddr": "10.0.0.2", 00:36:21.052 "adrfam": "ipv4", 00:36:21.052 "trsvcid": "4420", 00:36:21.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:21.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:21.052 "hdgst": false, 00:36:21.052 "ddgst": false 00:36:21.052 }, 00:36:21.052 "method": "bdev_nvme_attach_controller" 00:36:21.052 },{ 00:36:21.052 "params": { 00:36:21.052 "name": "Nvme2", 00:36:21.052 "trtype": "tcp", 00:36:21.052 "traddr": "10.0.0.2", 00:36:21.052 "adrfam": "ipv4", 00:36:21.052 "trsvcid": "4420", 00:36:21.052 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:21.052 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:21.052 "hdgst": false, 00:36:21.052 "ddgst": false 00:36:21.052 }, 00:36:21.052 "method": "bdev_nvme_attach_controller" 00:36:21.052 }' 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:21.052 15:54:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.052 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:21.052 ... 00:36:21.052 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:21.052 ... 00:36:21.052 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:21.052 ... 00:36:21.052 fio-3.35 00:36:21.052 Starting 24 threads 00:36:21.052 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.281 00:36:33.281 filename0: (groupid=0, jobs=1): err= 0: pid=1494375: Wed May 15 15:54:44 2024 00:36:33.281 read: IOPS=461, BW=1847KiB/s (1892kB/s)(18.1MiB/10012msec) 00:36:33.281 slat (usec): min=8, max=119, avg=53.60, stdev=27.33 00:36:33.281 clat (usec): min=25575, max=80729, avg=34167.64, stdev=2327.16 00:36:33.281 lat (usec): min=25591, max=80778, avg=34221.25, stdev=2323.62 00:36:33.281 clat percentiles (usec): 00:36:33.281 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:36:33.281 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.281 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[36439], 00:36:33.281 | 99.00th=[40633], 99.50th=[44827], 99.90th=[64226], 99.95th=[64226], 00:36:33.281 | 99.99th=[80217] 00:36:33.281 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1845.89, stdev=97.35, samples=19 00:36:33.281 iops : min= 384, max= 480, avg=461.47, stdev=24.34, samples=19 00:36:33.281 lat (msec) : 50=99.65%, 100=0.35% 00:36:33.281 cpu : usr=95.06%, sys=2.72%, ctx=184, majf=0, minf=18 00:36:33.281 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:33.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.281 filename0: (groupid=0, jobs=1): err= 0: pid=1494376: Wed May 15 15:54:44 2024 00:36:33.281 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10012msec) 00:36:33.281 slat (nsec): min=9556, max=95553, avg=38541.01, stdev=12885.38 00:36:33.281 clat (usec): min=14599, max=65915, avg=34177.65, stdev=2179.29 00:36:33.281 lat (usec): min=14617, max=65944, avg=34216.19, stdev=2179.09 00:36:33.281 clat percentiles (usec): 00:36:33.281 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.281 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.281 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.281 | 99.00th=[41681], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264], 00:36:33.281 | 99.99th=[65799] 00:36:33.281 bw ( KiB/s): min= 1648, max= 1920, per=4.16%, avg=1849.60, stdev=78.97, samples=20 00:36:33.281 iops : min= 412, max= 480, avg=462.40, stdev=19.74, samples=20 00:36:33.281 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 
00:36:33.281 cpu : usr=96.17%, sys=2.17%, ctx=71, majf=0, minf=14 00:36:33.281 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.281 filename0: (groupid=0, jobs=1): err= 0: pid=1494377: Wed May 15 15:54:44 2024 00:36:33.281 read: IOPS=461, BW=1847KiB/s (1892kB/s)(18.1MiB/10012msec) 00:36:33.281 slat (usec): min=8, max=105, avg=35.11, stdev=21.43 00:36:33.281 clat (usec): min=25485, max=64272, avg=34345.39, stdev=2116.31 00:36:33.281 lat (usec): min=25508, max=64317, avg=34380.50, stdev=2114.95 00:36:33.281 clat percentiles (usec): 00:36:33.281 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.281 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.281 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[36439], 00:36:33.281 | 99.00th=[40109], 99.50th=[45351], 99.90th=[64226], 99.95th=[64226], 00:36:33.281 | 99.99th=[64226] 00:36:33.281 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1845.89, stdev=98.37, samples=19 00:36:33.281 iops : min= 384, max= 480, avg=461.47, stdev=24.59, samples=19 00:36:33.281 lat (msec) : 50=99.65%, 100=0.35% 00:36:33.281 cpu : usr=98.34%, sys=1.25%, ctx=12, majf=0, minf=21 00:36:33.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.281 filename0: (groupid=0, jobs=1): err= 0: pid=1494378: Wed May 15 15:54:44 2024 00:36:33.281 read: IOPS=462, BW=1850KiB/s (1894kB/s)(18.1MiB/10011msec) 00:36:33.281 slat (usec): min=8, max=134, avg=28.61, stdev=13.38 00:36:33.281 clat (usec): min=12659, max=87295, avg=34378.03, stdev=2767.36 00:36:33.281 lat (usec): min=12668, max=87329, avg=34406.64, stdev=2768.10 00:36:33.281 clat percentiles (usec): 00:36:33.281 | 1.00th=[32375], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:36:33.281 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:36:33.281 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.281 | 99.00th=[41157], 99.50th=[52691], 99.90th=[65274], 99.95th=[65274], 00:36:33.281 | 99.99th=[87557] 00:36:33.281 bw ( KiB/s): min= 1539, max= 1920, per=4.16%, avg=1845.75, stdev=89.56, samples=20 00:36:33.281 iops : min= 384, max= 480, avg=461.40, stdev=22.53, samples=20 00:36:33.281 lat (msec) : 20=0.35%, 50=99.14%, 100=0.52% 00:36:33.281 cpu : usr=94.96%, sys=3.01%, ctx=266, majf=0, minf=24 00:36:33.281 IO depths : 1=0.1%, 2=5.9%, 4=23.6%, 8=57.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:36:33.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 complete : 0=0.0%, 4=94.1%, 8=0.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 issued rwts: total=4630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.281 filename0: (groupid=0, jobs=1): err= 0: pid=1494379: Wed May 15 15:54:44 2024 00:36:33.281 read: IOPS=461, BW=1847KiB/s 
(1892kB/s)(18.1MiB/10012msec) 00:36:33.281 slat (usec): min=10, max=143, avg=41.09, stdev=20.09 00:36:33.281 clat (usec): min=27918, max=72771, avg=34276.06, stdev=2163.87 00:36:33.281 lat (usec): min=27931, max=72810, avg=34317.15, stdev=2162.58 00:36:33.281 clat percentiles (usec): 00:36:33.281 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:36:33.281 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.281 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.281 | 99.00th=[39060], 99.50th=[45351], 99.90th=[64226], 99.95th=[64226], 00:36:33.281 | 99.99th=[72877] 00:36:33.281 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1845.89, stdev=98.37, samples=19 00:36:33.281 iops : min= 384, max= 480, avg=461.47, stdev=24.59, samples=19 00:36:33.281 lat (msec) : 50=99.65%, 100=0.35% 00:36:33.281 cpu : usr=94.42%, sys=3.34%, ctx=489, majf=0, minf=25 00:36:33.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.281 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.281 filename0: (groupid=0, jobs=1): err= 0: pid=1494380: Wed May 15 15:54:44 2024 00:36:33.281 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10014msec) 00:36:33.281 slat (usec): min=9, max=125, avg=53.20, stdev=24.13 00:36:33.281 clat (usec): min=22772, max=44443, avg=34073.10, stdev=1502.28 00:36:33.281 lat (usec): min=22782, max=44461, avg=34126.30, stdev=1499.17 00:36:33.281 clat percentiles (usec): 00:36:33.281 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:36:33.282 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.282 | 99.00th=[40633], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:36:33.282 | 99.99th=[44303] 00:36:33.282 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1849.60, stdev=77.42, samples=20 00:36:33.282 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:36:33.282 lat (msec) : 50=100.00% 00:36:33.282 cpu : usr=95.89%, sys=2.54%, ctx=84, majf=0, minf=30 00:36:33.282 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename0: (groupid=0, jobs=1): err= 0: pid=1494381: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=461, BW=1848KiB/s (1892kB/s)(18.1MiB/10007msec) 00:36:33.282 slat (usec): min=8, max=111, avg=37.21, stdev=16.00 00:36:33.282 clat (usec): min=14695, max=78501, avg=34354.12, stdev=3188.35 00:36:33.282 lat (usec): min=14704, max=78523, avg=34391.33, stdev=3187.13 00:36:33.282 clat percentiles (usec): 00:36:33.282 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.282 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[36439], 00:36:33.282 | 99.00th=[43779], 99.50th=[44303], 99.90th=[78119], 99.95th=[78119], 00:36:33.282 | 
99.99th=[78119] 00:36:33.282 bw ( KiB/s): min= 1520, max= 1920, per=4.14%, avg=1839.16, stdev=93.67, samples=19 00:36:33.282 iops : min= 380, max= 480, avg=459.79, stdev=23.42, samples=19 00:36:33.282 lat (msec) : 20=0.35%, 50=99.26%, 100=0.39% 00:36:33.282 cpu : usr=98.19%, sys=1.38%, ctx=22, majf=0, minf=29 00:36:33.282 IO depths : 1=0.2%, 2=6.5%, 4=24.9%, 8=56.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: total=4622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename0: (groupid=0, jobs=1): err= 0: pid=1494382: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=466, BW=1865KiB/s (1909kB/s)(18.2MiB/10023msec) 00:36:33.282 slat (usec): min=8, max=205, avg=21.55, stdev=14.63 00:36:33.282 clat (usec): min=10013, max=40889, avg=34133.44, stdev=2081.72 00:36:33.282 lat (usec): min=10025, max=40906, avg=34154.99, stdev=2080.32 00:36:33.282 clat percentiles (usec): 00:36:33.282 | 1.00th=[26084], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:36:33.282 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.282 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:36:33.282 | 99.99th=[40633] 00:36:33.282 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1862.40, stdev=65.33, samples=20 00:36:33.282 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:36:33.282 lat (msec) : 20=0.68%, 50=99.32% 00:36:33.282 cpu : usr=95.30%, sys=2.90%, ctx=110, majf=0, minf=23 00:36:33.282 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename1: (groupid=0, jobs=1): err= 0: pid=1494383: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:36:33.282 slat (usec): min=8, max=150, avg=51.83, stdev=29.24 00:36:33.282 clat (usec): min=9919, max=63316, avg=34047.69, stdev=2654.76 00:36:33.282 lat (usec): min=9937, max=63351, avg=34099.52, stdev=2652.63 00:36:33.282 clat percentiles (usec): 00:36:33.282 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:36:33.282 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:36:33.282 | 99.00th=[41157], 99.50th=[44827], 99.90th=[63177], 99.95th=[63177], 00:36:33.282 | 99.99th=[63177] 00:36:33.282 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1849.60, stdev=97.17, samples=20 00:36:33.282 iops : min= 384, max= 480, avg=462.40, stdev=24.29, samples=20 00:36:33.282 lat (msec) : 10=0.11%, 20=0.58%, 50=98.97%, 100=0.34% 00:36:33.282 cpu : usr=98.01%, sys=1.57%, ctx=17, majf=0, minf=26 00:36:33.282 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: 
total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename1: (groupid=0, jobs=1): err= 0: pid=1494384: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=464, BW=1860KiB/s (1905kB/s)(18.2MiB/10013msec) 00:36:33.282 slat (usec): min=8, max=118, avg=28.68, stdev=23.43 00:36:33.282 clat (usec): min=12506, max=50266, avg=34161.99, stdev=2022.26 00:36:33.282 lat (usec): min=12516, max=50291, avg=34190.67, stdev=2021.38 00:36:33.282 clat percentiles (usec): 00:36:33.282 | 1.00th=[31327], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:36:33.282 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.282 | 99.00th=[40633], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:36:33.282 | 99.99th=[50070] 00:36:33.282 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:36:33.282 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:36:33.282 lat (msec) : 20=0.69%, 50=99.27%, 100=0.04% 00:36:33.282 cpu : usr=98.12%, sys=1.49%, ctx=18, majf=0, minf=27 00:36:33.282 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename1: (groupid=0, jobs=1): err= 0: pid=1494385: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=465, BW=1860KiB/s (1905kB/s)(18.2MiB/10021msec) 00:36:33.282 slat (nsec): min=8463, max=97905, avg=21134.72, stdev=13223.28 00:36:33.282 clat (usec): min=15549, max=49208, avg=34247.59, stdev=1837.92 00:36:33.282 lat (usec): min=15603, max=49264, avg=34268.73, stdev=1837.37 00:36:33.282 clat percentiles (usec): 00:36:33.282 | 1.00th=[24249], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:36:33.282 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:33.282 | 99.00th=[40633], 99.50th=[45351], 99.90th=[45876], 99.95th=[47973], 00:36:33.282 | 99.99th=[49021] 00:36:33.282 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1855.16, stdev=52.11, samples=19 00:36:33.282 iops : min= 448, max= 480, avg=463.79, stdev=13.03, samples=19 00:36:33.282 lat (msec) : 20=0.17%, 50=99.83% 00:36:33.282 cpu : usr=98.14%, sys=1.45%, ctx=20, majf=0, minf=26 00:36:33.282 IO depths : 1=0.3%, 2=6.4%, 4=24.6%, 8=56.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: total=4660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename1: (groupid=0, jobs=1): err= 0: pid=1494386: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=462, BW=1849KiB/s (1894kB/s)(18.1MiB/10002msec) 00:36:33.282 slat (nsec): min=8705, max=99785, avg=32127.15, stdev=13940.78 00:36:33.282 clat (usec): min=18870, max=53783, avg=34320.50, stdev=1637.91 00:36:33.282 lat (usec): min=18883, max=53825, avg=34352.62, stdev=1637.65 00:36:33.282 clat percentiles (usec): 00:36:33.282 | 1.00th=[33162], 5.00th=[33424], 
10.00th=[33817], 20.00th=[33817], 00:36:33.282 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.282 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[36439], 00:36:33.282 | 99.00th=[41157], 99.50th=[47449], 99.90th=[49021], 99.95th=[50070], 00:36:33.282 | 99.99th=[53740] 00:36:33.282 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1845.89, stdev=77.69, samples=19 00:36:33.282 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:36:33.282 lat (msec) : 20=0.13%, 50=99.83%, 100=0.04% 00:36:33.282 cpu : usr=98.32%, sys=1.31%, ctx=15, majf=0, minf=17 00:36:33.282 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.282 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.282 filename1: (groupid=0, jobs=1): err= 0: pid=1494387: Wed May 15 15:54:44 2024 00:36:33.282 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10007msec) 00:36:33.282 slat (usec): min=8, max=213, avg=46.25, stdev=26.38 00:36:33.283 clat (usec): min=15295, max=78196, avg=34193.28, stdev=3147.94 00:36:33.283 lat (usec): min=15308, max=78228, avg=34239.53, stdev=3146.70 00:36:33.283 clat percentiles (usec): 00:36:33.283 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:36:33.283 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.283 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.283 | 99.00th=[40109], 99.50th=[44827], 99.90th=[78119], 99.95th=[78119], 00:36:33.283 | 99.99th=[78119] 00:36:33.283 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1839.16, stdev=97.39, samples=19 00:36:33.283 iops : min= 384, max= 480, avg=459.79, stdev=24.35, samples=19 00:36:33.283 lat (msec) : 20=0.52%, 50=99.13%, 100=0.35% 00:36:33.283 cpu : usr=96.84%, sys=1.79%, ctx=160, majf=0, minf=24 00:36:33.283 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:33.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.283 filename1: (groupid=0, jobs=1): err= 0: pid=1494388: Wed May 15 15:54:44 2024 00:36:33.283 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10018msec) 00:36:33.283 slat (usec): min=9, max=132, avg=40.76, stdev=21.35 00:36:33.283 clat (usec): min=16690, max=49626, avg=34230.52, stdev=1640.18 00:36:33.283 lat (usec): min=16748, max=49671, avg=34271.28, stdev=1637.19 00:36:33.283 clat percentiles (usec): 00:36:33.283 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.283 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.283 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.283 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45351], 99.95th=[45876], 00:36:33.283 | 99.99th=[49546] 00:36:33.283 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1852.63, stdev=73.05, samples=19 00:36:33.283 iops : min= 416, max= 480, avg=463.16, stdev=18.26, samples=19 00:36:33.283 lat (msec) : 20=0.34%, 50=99.66% 00:36:33.283 cpu : usr=96.71%, sys=1.95%, ctx=108, majf=0, minf=22 
00:36:33.283 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:33.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.283 filename1: (groupid=0, jobs=1): err= 0: pid=1494389: Wed May 15 15:54:44 2024 00:36:33.283 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10007msec) 00:36:33.283 slat (usec): min=8, max=109, avg=34.31, stdev=11.76 00:36:33.283 clat (usec): min=22527, max=59109, avg=34296.10, stdev=1976.08 00:36:33.283 lat (usec): min=22543, max=59210, avg=34330.41, stdev=1978.60 00:36:33.283 clat percentiles (usec): 00:36:33.283 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.283 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.283 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.283 | 99.00th=[41681], 99.50th=[44303], 99.90th=[58983], 99.95th=[58983], 00:36:33.283 | 99.99th=[58983] 00:36:33.283 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1845.89, stdev=98.37, samples=19 00:36:33.283 iops : min= 384, max= 480, avg=461.47, stdev=24.59, samples=19 00:36:33.283 lat (msec) : 50=99.65%, 100=0.35% 00:36:33.283 cpu : usr=97.98%, sys=1.60%, ctx=24, majf=0, minf=18 00:36:33.283 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.283 filename1: (groupid=0, jobs=1): err= 0: pid=1494390: Wed May 15 15:54:44 2024 00:36:33.283 read: IOPS=462, BW=1849KiB/s (1894kB/s)(18.1MiB/10002msec) 00:36:33.283 slat (usec): min=9, max=104, avg=39.44, stdev=19.61 00:36:33.283 clat (usec): min=32215, max=47557, avg=34264.79, stdev=1376.68 00:36:33.283 lat (usec): min=32255, max=47596, avg=34304.23, stdev=1374.92 00:36:33.283 clat percentiles (usec): 00:36:33.283 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:36:33.283 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.283 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.283 | 99.00th=[40633], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:36:33.283 | 99.99th=[47449] 00:36:33.283 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1845.89, stdev=77.69, samples=19 00:36:33.283 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:36:33.283 lat (msec) : 50=100.00% 00:36:33.283 cpu : usr=97.47%, sys=1.97%, ctx=101, majf=0, minf=27 00:36:33.283 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:33.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.283 filename2: (groupid=0, jobs=1): err= 0: pid=1494391: Wed May 15 15:54:44 2024 00:36:33.283 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10003msec) 00:36:33.283 slat (usec): min=8, max=107, avg=40.89, 
stdev=21.36 00:36:33.283 clat (usec): min=15222, max=78195, avg=34231.28, stdev=3030.98 00:36:33.283 lat (usec): min=15236, max=78233, avg=34272.17, stdev=3030.73 00:36:33.283 clat percentiles (usec): 00:36:33.283 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:36:33.283 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.283 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:33.283 | 99.00th=[39060], 99.50th=[45351], 99.90th=[78119], 99.95th=[78119], 00:36:33.283 | 99.99th=[78119] 00:36:33.283 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1839.16, stdev=97.39, samples=19 00:36:33.283 iops : min= 384, max= 480, avg=459.79, stdev=24.35, samples=19 00:36:33.283 lat (msec) : 20=0.35%, 50=99.31%, 100=0.35% 00:36:33.283 cpu : usr=98.36%, sys=1.25%, ctx=13, majf=0, minf=20 00:36:33.283 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:33.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.283 filename2: (groupid=0, jobs=1): err= 0: pid=1494392: Wed May 15 15:54:44 2024 00:36:33.283 read: IOPS=461, BW=1847KiB/s (1891kB/s)(18.1MiB/10016msec) 00:36:33.283 slat (nsec): min=8682, max=84618, avg=35960.27, stdev=10911.50 00:36:33.283 clat (usec): min=22573, max=84988, avg=34343.26, stdev=2723.98 00:36:33.283 lat (usec): min=22612, max=85017, avg=34379.23, stdev=2723.49 00:36:33.283 clat percentiles (usec): 00:36:33.283 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.283 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.283 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.283 | 99.00th=[41681], 99.50th=[44303], 99.90th=[72877], 99.95th=[72877], 00:36:33.283 | 99.99th=[85459] 00:36:33.283 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1839.16, stdev=97.39, samples=19 00:36:33.283 iops : min= 384, max= 480, avg=459.79, stdev=24.35, samples=19 00:36:33.283 lat (msec) : 50=99.65%, 100=0.35% 00:36:33.283 cpu : usr=98.36%, sys=1.25%, ctx=13, majf=0, minf=18 00:36:33.283 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.283 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.283 filename2: (groupid=0, jobs=1): err= 0: pid=1494393: Wed May 15 15:54:44 2024 00:36:33.283 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10007msec) 00:36:33.284 slat (usec): min=8, max=101, avg=34.45, stdev=15.67 00:36:33.284 clat (usec): min=15294, max=78607, avg=34300.88, stdev=3028.32 00:36:33.284 lat (usec): min=15317, max=78631, avg=34335.33, stdev=3027.03 00:36:33.284 clat percentiles (usec): 00:36:33.284 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.284 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.284 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:33.284 | 99.00th=[40109], 99.50th=[45351], 99.90th=[78119], 99.95th=[78119], 00:36:33.284 | 99.99th=[78119] 00:36:33.284 bw ( KiB/s): min= 1539, 
max= 1976, per=4.16%, avg=1846.15, stdev=99.12, samples=20 00:36:33.284 iops : min= 384, max= 494, avg=461.50, stdev=24.90, samples=20 00:36:33.284 lat (msec) : 20=0.35%, 50=99.31%, 100=0.35% 00:36:33.284 cpu : usr=98.26%, sys=1.36%, ctx=14, majf=0, minf=21 00:36:33.284 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.284 filename2: (groupid=0, jobs=1): err= 0: pid=1494394: Wed May 15 15:54:44 2024 00:36:33.284 read: IOPS=461, BW=1847KiB/s (1891kB/s)(18.1MiB/10016msec) 00:36:33.284 slat (usec): min=9, max=131, avg=45.11, stdev=21.41 00:36:33.284 clat (usec): min=22514, max=73229, avg=34259.34, stdev=2651.19 00:36:33.284 lat (usec): min=22561, max=73269, avg=34304.45, stdev=2648.85 00:36:33.284 clat percentiles (usec): 00:36:33.284 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:36:33.284 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.284 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.284 | 99.00th=[41681], 99.50th=[44303], 99.90th=[72877], 99.95th=[72877], 00:36:33.284 | 99.99th=[72877] 00:36:33.284 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1839.16, stdev=97.39, samples=19 00:36:33.284 iops : min= 384, max= 480, avg=459.79, stdev=24.35, samples=19 00:36:33.284 lat (msec) : 50=99.65%, 100=0.35% 00:36:33.284 cpu : usr=93.06%, sys=3.67%, ctx=341, majf=0, minf=15 00:36:33.284 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:33.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.284 filename2: (groupid=0, jobs=1): err= 0: pid=1494395: Wed May 15 15:54:44 2024 00:36:33.284 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10014msec) 00:36:33.284 slat (nsec): min=8607, max=92362, avg=34605.55, stdev=15028.37 00:36:33.284 clat (usec): min=17905, max=44326, avg=34243.21, stdev=1380.94 00:36:33.284 lat (usec): min=17950, max=44349, avg=34277.81, stdev=1380.67 00:36:33.284 clat percentiles (usec): 00:36:33.284 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:36:33.284 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:33.284 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.284 | 99.00th=[40633], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:36:33.284 | 99.99th=[44303] 00:36:33.284 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1849.60, stdev=77.42, samples=20 00:36:33.284 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:36:33.284 lat (msec) : 20=0.04%, 50=99.96% 00:36:33.284 cpu : usr=98.38%, sys=1.22%, ctx=28, majf=0, minf=24 00:36:33.284 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:33.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.284 
latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.284 filename2: (groupid=0, jobs=1): err= 0: pid=1494396: Wed May 15 15:54:44 2024 00:36:33.284 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10004msec) 00:36:33.284 slat (nsec): min=8474, max=84445, avg=24533.10, stdev=13892.53 00:36:33.284 clat (usec): min=8481, max=45277, avg=34072.26, stdev=2513.36 00:36:33.284 lat (usec): min=8506, max=45298, avg=34096.79, stdev=2512.71 00:36:33.284 clat percentiles (usec): 00:36:33.284 | 1.00th=[22152], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:36:33.284 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:36:33.284 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:33.284 | 99.00th=[39060], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:36:33.284 | 99.99th=[45351] 00:36:33.284 bw ( KiB/s): min= 1792, max= 2024, per=4.20%, avg=1864.84, stdev=74.71, samples=19 00:36:33.284 iops : min= 448, max= 506, avg=466.21, stdev=18.68, samples=19 00:36:33.284 lat (msec) : 10=0.34%, 20=0.56%, 50=99.10% 00:36:33.284 cpu : usr=94.47%, sys=3.07%, ctx=114, majf=0, minf=29 00:36:33.284 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:33.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 issued rwts: total=4669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.284 filename2: (groupid=0, jobs=1): err= 0: pid=1494397: Wed May 15 15:54:44 2024 00:36:33.284 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:36:33.284 slat (usec): min=8, max=169, avg=53.71, stdev=23.80 00:36:33.284 clat (usec): min=9890, max=63153, avg=34024.73, stdev=2628.88 00:36:33.284 lat (usec): min=9903, max=63190, avg=34078.43, stdev=2628.52 00:36:33.284 clat percentiles (usec): 00:36:33.284 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:36:33.284 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.284 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:33.284 | 99.00th=[40109], 99.50th=[44827], 99.90th=[63177], 99.95th=[63177], 00:36:33.284 | 99.99th=[63177] 00:36:33.284 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1849.60, stdev=97.17, samples=20 00:36:33.284 iops : min= 384, max= 480, avg=462.40, stdev=24.29, samples=20 00:36:33.284 lat (msec) : 10=0.15%, 20=0.54%, 50=98.97%, 100=0.34% 00:36:33.284 cpu : usr=95.11%, sys=2.80%, ctx=283, majf=0, minf=16 00:36:33.284 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:33.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.284 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.284 filename2: (groupid=0, jobs=1): err= 0: pid=1494398: Wed May 15 15:54:44 2024 00:36:33.284 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10011msec) 00:36:33.284 slat (usec): min=7, max=123, avg=48.98, stdev=20.87 00:36:33.284 clat (usec): min=14675, max=53407, avg=34073.97, stdev=2073.76 00:36:33.284 lat (usec): min=14716, max=53438, avg=34122.94, stdev=2073.23 00:36:33.284 clat percentiles (usec): 00:36:33.284 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:36:33.284 | 
30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:33.284 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:36:33.284 | 99.00th=[41681], 99.50th=[44303], 99.90th=[53216], 99.95th=[53216], 00:36:33.284 | 99.99th=[53216] 00:36:33.284 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1849.75, stdev=77.04, samples=20 00:36:33.284 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:36:33.284 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:36:33.284 cpu : usr=98.11%, sys=1.49%, ctx=17, majf=0, minf=17 00:36:33.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:33.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.285 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:33.285 00:36:33.285 Run status group 0 (all jobs): 00:36:33.285 READ: bw=43.4MiB/s (45.5MB/s), 1847KiB/s-1867KiB/s (1891kB/s-1912kB/s), io=435MiB (456MB), run=10002-10023msec 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 bdev_null0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 [2024-05-15 15:54:44.977184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 bdev_null1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:33.285 { 00:36:33.285 "params": { 00:36:33.285 "name": "Nvme$subsystem", 00:36:33.285 "trtype": "$TEST_TRANSPORT", 
00:36:33.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.285 "adrfam": "ipv4", 00:36:33.285 "trsvcid": "$NVMF_PORT", 00:36:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.285 "hdgst": ${hdgst:-false}, 00:36:33.285 "ddgst": ${ddgst:-false} 00:36:33.285 }, 00:36:33.285 "method": "bdev_nvme_attach_controller" 00:36:33.285 } 00:36:33.285 EOF 00:36:33.285 )") 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.285 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:33.286 { 00:36:33.286 "params": { 00:36:33.286 "name": "Nvme$subsystem", 00:36:33.286 "trtype": "$TEST_TRANSPORT", 00:36:33.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.286 "adrfam": "ipv4", 00:36:33.286 "trsvcid": "$NVMF_PORT", 00:36:33.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.286 "hdgst": ${hdgst:-false}, 00:36:33.286 "ddgst": ${ddgst:-false} 00:36:33.286 }, 00:36:33.286 "method": "bdev_nvme_attach_controller" 00:36:33.286 } 00:36:33.286 EOF 00:36:33.286 )") 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:33.286 "params": { 00:36:33.286 "name": "Nvme0", 00:36:33.286 "trtype": "tcp", 00:36:33.286 "traddr": "10.0.0.2", 00:36:33.286 "adrfam": "ipv4", 00:36:33.286 "trsvcid": "4420", 00:36:33.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.286 "hdgst": false, 00:36:33.286 "ddgst": false 00:36:33.286 }, 00:36:33.286 "method": "bdev_nvme_attach_controller" 00:36:33.286 },{ 00:36:33.286 "params": { 00:36:33.286 "name": "Nvme1", 00:36:33.286 "trtype": "tcp", 00:36:33.286 "traddr": "10.0.0.2", 00:36:33.286 "adrfam": "ipv4", 00:36:33.286 "trsvcid": "4420", 00:36:33.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:33.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:33.286 "hdgst": false, 00:36:33.286 "ddgst": false 00:36:33.286 }, 00:36:33.286 "method": "bdev_nvme_attach_controller" 00:36:33.286 }' 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:33.286 15:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.286 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:33.286 ... 00:36:33.286 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:33.286 ... 
00:36:33.286 fio-3.35 00:36:33.286 Starting 4 threads 00:36:33.286 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.551 00:36:38.551 filename0: (groupid=0, jobs=1): err= 0: pid=1495774: Wed May 15 15:54:51 2024 00:36:38.551 read: IOPS=1878, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5003msec) 00:36:38.551 slat (nsec): min=4502, max=42279, avg=13483.97, stdev=3798.48 00:36:38.551 clat (usec): min=934, max=9792, avg=4213.41, stdev=686.16 00:36:38.551 lat (usec): min=954, max=9810, avg=4226.89, stdev=686.22 00:36:38.551 clat percentiles (usec): 00:36:38.551 | 1.00th=[ 2442], 5.00th=[ 3261], 10.00th=[ 3556], 20.00th=[ 3884], 00:36:38.551 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:36:38.551 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5473], 00:36:38.551 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 8029], 99.95th=[ 8717], 00:36:38.551 | 99.99th=[ 9765] 00:36:38.551 bw ( KiB/s): min=14476, max=15648, per=25.14%, avg=15025.20, stdev=322.39, samples=10 00:36:38.551 iops : min= 1809, max= 1956, avg=1878.10, stdev=40.39, samples=10 00:36:38.551 lat (usec) : 1000=0.01% 00:36:38.551 lat (msec) : 2=0.45%, 4=25.36%, 10=74.18% 00:36:38.551 cpu : usr=93.38%, sys=6.10%, ctx=14, majf=0, minf=9 00:36:38.551 IO depths : 1=0.1%, 2=13.4%, 4=57.9%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.551 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.551 issued rwts: total=9397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.551 filename0: (groupid=0, jobs=1): err= 0: pid=1495775: Wed May 15 15:54:51 2024 00:36:38.551 read: IOPS=1844, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5002msec) 00:36:38.551 slat (nsec): min=4414, max=41620, avg=13838.53, stdev=4187.18 00:36:38.551 clat (usec): min=855, max=7610, avg=4289.74, stdev=698.22 00:36:38.551 lat (usec): min=869, max=7627, avg=4303.58, stdev=698.03 00:36:38.551 clat percentiles (usec): 00:36:38.551 | 1.00th=[ 2442], 5.00th=[ 3359], 10.00th=[ 3720], 20.00th=[ 3982], 00:36:38.551 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:38.551 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 5080], 95.00th=[ 5669], 00:36:38.551 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7439], 99.95th=[ 7570], 00:36:38.551 | 99.99th=[ 7635] 00:36:38.551 bw ( KiB/s): min=14128, max=15520, per=24.67%, avg=14744.89, stdev=388.73, samples=9 00:36:38.551 iops : min= 1766, max= 1940, avg=1843.11, stdev=48.59, samples=9 00:36:38.551 lat (usec) : 1000=0.05% 00:36:38.551 lat (msec) : 2=0.46%, 4=20.35%, 10=79.14% 00:36:38.551 cpu : usr=92.10%, sys=6.78%, ctx=141, majf=0, minf=0 00:36:38.551 IO depths : 1=0.1%, 2=11.9%, 4=59.8%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.551 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.551 issued rwts: total=9225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.551 filename1: (groupid=0, jobs=1): err= 0: pid=1495776: Wed May 15 15:54:51 2024 00:36:38.551 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5002msec) 00:36:38.551 slat (nsec): min=4437, max=33268, avg=12964.35, stdev=3649.45 00:36:38.551 clat (usec): min=855, max=7871, avg=4278.09, stdev=687.56 00:36:38.551 lat (usec): min=869, max=7886, avg=4291.05, stdev=687.54 00:36:38.551 
clat percentiles (usec): 00:36:38.551 | 1.00th=[ 2638], 5.00th=[ 3294], 10.00th=[ 3654], 20.00th=[ 3982], 00:36:38.551 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:38.551 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 5080], 95.00th=[ 5604], 00:36:38.551 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 7504], 99.95th=[ 7635], 00:36:38.551 | 99.99th=[ 7898] 00:36:38.551 bw ( KiB/s): min=14400, max=15728, per=24.68%, avg=14753.78, stdev=414.72, samples=9 00:36:38.551 iops : min= 1800, max= 1966, avg=1844.22, stdev=51.84, samples=9 00:36:38.551 lat (usec) : 1000=0.02% 00:36:38.551 lat (msec) : 2=0.43%, 4=20.86%, 10=78.69% 00:36:38.551 cpu : usr=93.82%, sys=5.62%, ctx=11, majf=0, minf=0 00:36:38.551 IO depths : 1=0.1%, 2=12.5%, 4=58.2%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.551 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.551 issued rwts: total=9259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.551 filename1: (groupid=0, jobs=1): err= 0: pid=1495777: Wed May 15 15:54:51 2024 00:36:38.551 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.3MiB/5004msec) 00:36:38.551 slat (nsec): min=4441, max=38504, avg=11979.78, stdev=3376.81 00:36:38.551 clat (usec): min=996, max=7776, avg=4172.04, stdev=606.28 00:36:38.551 lat (usec): min=1009, max=7791, avg=4184.02, stdev=606.32 00:36:38.551 clat percentiles (usec): 00:36:38.551 | 1.00th=[ 2704], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3851], 00:36:38.551 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:36:38.552 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5080], 00:36:38.552 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 7635], 00:36:38.552 | 99.99th=[ 7767] 00:36:38.552 bw ( KiB/s): min=14608, max=15840, per=25.42%, avg=15196.80, stdev=443.71, samples=10 00:36:38.552 iops : min= 1826, max= 1980, avg=1899.60, stdev=55.46, samples=10 00:36:38.552 lat (usec) : 1000=0.01% 00:36:38.552 lat (msec) : 2=0.23%, 4=26.26%, 10=73.50% 00:36:38.552 cpu : usr=93.18%, sys=6.18%, ctx=15, majf=0, minf=9 00:36:38.552 IO depths : 1=0.2%, 2=9.3%, 4=62.9%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.552 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.552 issued rwts: total=9506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.552 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.552 00:36:38.552 Run status group 0 (all jobs): 00:36:38.552 READ: bw=58.4MiB/s (61.2MB/s), 14.4MiB/s-14.8MiB/s (15.1MB/s-15.6MB/s), io=292MiB (306MB), run=5002-5004msec 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 00:36:38.552 real 0m24.208s 00:36:38.552 user 4m29.894s 00:36:38.552 sys 0m7.965s 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 ************************************ 00:36:38.552 END TEST fio_dif_rand_params 00:36:38.552 ************************************ 00:36:38.552 15:54:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:38.552 15:54:51 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:38.552 15:54:51 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 ************************************ 00:36:38.552 START TEST fio_dif_digest 00:36:38.552 ************************************ 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:38.552 15:54:51 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 bdev_null0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.552 [2024-05-15 15:54:51.485971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:38.552 { 00:36:38.552 "params": { 00:36:38.552 "name": "Nvme$subsystem", 00:36:38.552 "trtype": "$TEST_TRANSPORT", 00:36:38.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:38.552 "adrfam": "ipv4", 00:36:38.552 "trsvcid": "$NVMF_PORT", 00:36:38.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:38.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:38.552 "hdgst": ${hdgst:-false}, 00:36:38.552 "ddgst": ${ddgst:-false} 00:36:38.552 }, 00:36:38.552 "method": 
"bdev_nvme_attach_controller" 00:36:38.552 } 00:36:38.552 EOF 00:36:38.552 )") 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:38.552 "params": { 00:36:38.552 "name": "Nvme0", 00:36:38.552 "trtype": "tcp", 00:36:38.552 "traddr": "10.0.0.2", 00:36:38.552 "adrfam": "ipv4", 00:36:38.552 "trsvcid": "4420", 00:36:38.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.552 "hdgst": true, 00:36:38.552 "ddgst": true 00:36:38.552 }, 00:36:38.552 "method": "bdev_nvme_attach_controller" 00:36:38.552 }' 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:38.552 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:38.553 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:38.553 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:38.553 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:38.553 15:54:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:38.811 ... 
00:36:38.811 fio-3.35 00:36:38.811 Starting 3 threads 00:36:38.811 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.000 00:36:51.000 filename0: (groupid=0, jobs=1): err= 0: pid=1496528: Wed May 15 15:55:02 2024 00:36:51.000 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10046msec) 00:36:51.000 slat (nsec): min=5118, max=40308, avg=16238.97, stdev=4301.89 00:36:51.000 clat (usec): min=8898, max=52092, avg=14825.36, stdev=1705.27 00:36:51.000 lat (usec): min=8913, max=52116, avg=14841.60, stdev=1705.29 00:36:51.000 clat percentiles (usec): 00:36:51.000 | 1.00th=[10159], 5.00th=[12780], 10.00th=[13435], 20.00th=[13960], 00:36:51.000 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:36:51.000 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:36:51.000 | 99.00th=[17433], 99.50th=[17957], 99.90th=[22414], 99.95th=[50070], 00:36:51.000 | 99.99th=[52167] 00:36:51.000 bw ( KiB/s): min=25088, max=27392, per=34.50%, avg=25920.00, stdev=709.03, samples=20 00:36:51.000 iops : min= 196, max= 214, avg=202.50, stdev= 5.54, samples=20 00:36:51.000 lat (msec) : 10=0.79%, 20=98.96%, 50=0.20%, 100=0.05% 00:36:51.000 cpu : usr=88.45%, sys=8.78%, ctx=649, majf=0, minf=151 00:36:51.000 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.000 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:51.000 filename0: (groupid=0, jobs=1): err= 0: pid=1496529: Wed May 15 15:55:02 2024 00:36:51.000 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(242MiB/10006msec) 00:36:51.000 slat (nsec): min=4790, max=50007, avg=18874.56, stdev=4055.03 00:36:51.000 clat (usec): min=7064, max=58722, avg=15480.15, stdev=3050.07 00:36:51.000 lat (usec): min=7073, max=58748, avg=15499.02, stdev=3050.01 00:36:51.000 clat percentiles (usec): 00:36:51.000 | 1.00th=[12387], 5.00th=[13566], 10.00th=[13960], 20.00th=[14484], 00:36:51.000 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:36:51.000 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:36:51.000 | 99.00th=[18220], 99.50th=[21365], 99.90th=[56886], 99.95th=[58983], 00:36:51.000 | 99.99th=[58983] 00:36:51.000 bw ( KiB/s): min=21504, max=25600, per=32.96%, avg=24757.55, stdev=1035.51, samples=20 00:36:51.000 iops : min= 168, max= 200, avg=193.40, stdev= 8.11, samples=20 00:36:51.000 lat (msec) : 10=0.41%, 20=98.97%, 50=0.15%, 100=0.46% 00:36:51.000 cpu : usr=89.58%, sys=9.06%, ctx=499, majf=0, minf=131 00:36:51.000 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.000 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:51.000 filename0: (groupid=0, jobs=1): err= 0: pid=1496530: Wed May 15 15:55:02 2024 00:36:51.000 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(242MiB/10045msec) 00:36:51.000 slat (nsec): min=4692, max=35677, avg=14349.16, stdev=2209.40 00:36:51.000 clat (usec): min=9142, max=58278, avg=15549.27, stdev=2348.87 00:36:51.000 lat (usec): min=9156, max=58292, avg=15563.62, stdev=2348.84 00:36:51.000 clat percentiles (usec): 
00:36:51.000 | 1.00th=[11338], 5.00th=[13829], 10.00th=[14222], 20.00th=[14746], 00:36:51.000 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:36:51.000 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:36:51.000 | 99.00th=[17957], 99.50th=[19006], 99.90th=[57410], 99.95th=[58459], 00:36:51.000 | 99.99th=[58459] 00:36:51.000 bw ( KiB/s): min=22272, max=26112, per=32.90%, avg=24719.15, stdev=742.14, samples=20 00:36:51.000 iops : min= 174, max= 204, avg=193.10, stdev= 5.82, samples=20 00:36:51.000 lat (msec) : 10=0.05%, 20=99.48%, 50=0.26%, 100=0.21% 00:36:51.000 cpu : usr=92.36%, sys=7.11%, ctx=27, majf=0, minf=88 00:36:51.000 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.000 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:51.000 00:36:51.000 Run status group 0 (all jobs): 00:36:51.000 READ: bw=73.4MiB/s (76.9MB/s), 24.1MiB/s-25.2MiB/s (25.2MB/s-26.4MB/s), io=737MiB (773MB), run=10006-10046msec 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.000 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.001 00:36:51.001 real 0m10.987s 00:36:51.001 user 0m28.219s 00:36:51.001 sys 0m2.772s 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:51.001 15:55:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.001 ************************************ 00:36:51.001 END TEST fio_dif_digest 00:36:51.001 ************************************ 00:36:51.001 15:55:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:51.001 15:55:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:51.001 rmmod nvme_tcp 00:36:51.001 rmmod 
nvme_fabrics 00:36:51.001 rmmod nvme_keyring 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1490101 ']' 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1490101 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1490101 ']' 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1490101 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1490101 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1490101' 00:36:51.001 killing process with pid 1490101 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1490101 00:36:51.001 [2024-05-15 15:55:02.545411] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:51.001 15:55:02 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1490101 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:51.001 15:55:02 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:51.001 Waiting for block devices as requested 00:36:51.001 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:51.001 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:51.258 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:51.258 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:51.258 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:51.258 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:51.516 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:51.516 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:51.516 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:51.516 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:51.773 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:51.773 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:51.773 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:52.032 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:52.032 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:52.032 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:52.032 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:52.290 15:55:05 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:52.290 15:55:05 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:52.290 15:55:05 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:52.290 15:55:05 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:52.290 15:55:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.290 15:55:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:52.290 15:55:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.191 15:55:07 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:36:54.191 00:36:54.191 real 1m7.130s 00:36:54.191 user 6m24.469s 00:36:54.191 sys 0m21.061s 00:36:54.191 15:55:07 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:54.191 15:55:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.191 ************************************ 00:36:54.191 END TEST nvmf_dif 00:36:54.191 ************************************ 00:36:54.191 15:55:07 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:54.191 15:55:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:54.191 15:55:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:54.191 15:55:07 -- common/autotest_common.sh@10 -- # set +x 00:36:54.191 ************************************ 00:36:54.191 START TEST nvmf_abort_qd_sizes 00:36:54.191 ************************************ 00:36:54.191 15:55:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:54.447 * Looking for test storage... 00:36:54.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.447 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.448 15:55:07 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:54.448 15:55:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:56.974 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.974 15:55:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:56.974 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:56.975 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:56.975 Found net devices under 0000:09:00.0: cvl_0_0 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:56.975 Found net devices under 0000:09:00.1: cvl_0_1 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes 
-- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:56.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:36:56.975 00:36:56.975 --- 10.0.0.2 ping statistics --- 00:36:56.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.975 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:56.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:36:56.975 00:36:56.975 --- 10.0.0.1 ping statistics --- 00:36:56.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.975 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:56.975 15:55:09 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:58.350 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:58.350 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:58.350 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:59.293 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1501917 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1501917 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1501917 ']' 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:59.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:59.293 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:59.589 [2024-05-15 15:55:12.406109] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:36:59.589 [2024-05-15 15:55:12.406182] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:59.589 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.589 [2024-05-15 15:55:12.449722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:59.589 [2024-05-15 15:55:12.487454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:59.589 [2024-05-15 15:55:12.576049] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:59.589 [2024-05-15 15:55:12.576110] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:59.589 [2024-05-15 15:55:12.576126] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:59.589 [2024-05-15 15:55:12.576140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:59.589 [2024-05-15 15:55:12.576157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:59.589 [2024-05-15 15:55:12.576249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.589 [2024-05-15 15:55:12.576294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:59.589 [2024-05-15 15:55:12.576379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:59.589 [2024-05-15 15:55:12.576382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in 
"${nvmes[@]}" 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:59.847 15:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:59.847 ************************************ 00:36:59.847 START TEST spdk_target_abort 00:36:59.847 ************************************ 00:36:59.847 15:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:59.847 15:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:59.847 15:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:36:59.847 15:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.847 15:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.173 spdk_targetn1 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.173 [2024-05-15 15:55:15.622550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.173 15:55:15 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.173 [2024-05-15 15:55:15.654549] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:03.173 [2024-05-15 15:55:15.654855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:03.173 15:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:03.173 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.447 Initializing NVMe Controllers 00:37:06.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:06.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:06.447 Initialization complete. Launching workers. 00:37:06.447 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12627, failed: 0 00:37:06.447 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1198, failed to submit 11429 00:37:06.447 success 820, unsuccess 378, failed 0 00:37:06.447 15:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:06.447 15:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:06.447 EAL: No free 2048 kB hugepages reported on node 1 00:37:09.724 Initializing NVMe Controllers 00:37:09.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:09.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:09.724 Initialization complete. Launching workers. 00:37:09.724 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8595, failed: 0 00:37:09.724 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1268, failed to submit 7327 00:37:09.724 success 337, unsuccess 931, failed 0 00:37:09.724 15:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:09.724 15:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:09.724 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.252 Initializing NVMe Controllers 00:37:12.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:12.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:12.252 Initialization complete. Launching workers. 
00:37:12.252 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31187, failed: 0 00:37:12.252 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2668, failed to submit 28519 00:37:12.252 success 522, unsuccess 2146, failed 0 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.252 15:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1501917 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1501917 ']' 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1501917 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1501917 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1501917' 00:37:13.624 killing process with pid 1501917 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1501917 00:37:13.624 [2024-05-15 15:55:26.639870] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:13.624 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1501917 00:37:13.882 00:37:13.882 real 0m14.083s 00:37:13.882 user 0m53.404s 00:37:13.882 sys 0m2.597s 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.882 ************************************ 00:37:13.882 END TEST spdk_target_abort 00:37:13.882 ************************************ 00:37:13.882 15:55:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:13.882 15:55:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:13.882 15:55:26 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:37:13.882 15:55:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:13.882 ************************************ 00:37:13.882 START TEST kernel_target_abort 00:37:13.882 ************************************ 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:13.882 15:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:15.255 Waiting for block devices as requested 00:37:15.255 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:15.255 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:15.255 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:15.514 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:15.514 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:15.514 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:15.514 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:15.772 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:15.772 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:37:15.772 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:16.030 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:16.030 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:16.030 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:16.030 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:16.288 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:16.288 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:16.288 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:16.547 No valid GPT data, bailing 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:16.547 15:55:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:37:16.547 00:37:16.547 Discovery Log Number of Records 2, Generation counter 2 00:37:16.547 =====Discovery Log Entry 0====== 00:37:16.547 trtype: tcp 00:37:16.547 adrfam: ipv4 00:37:16.547 subtype: current discovery subsystem 00:37:16.547 treq: not specified, sq flow control disable supported 00:37:16.547 portid: 1 00:37:16.547 trsvcid: 4420 00:37:16.547 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:16.547 traddr: 10.0.0.1 00:37:16.547 eflags: none 00:37:16.547 sectype: none 00:37:16.547 =====Discovery Log Entry 1====== 00:37:16.547 trtype: tcp 00:37:16.547 adrfam: ipv4 00:37:16.547 subtype: nvme subsystem 00:37:16.547 treq: not specified, sq flow control disable supported 00:37:16.547 portid: 1 00:37:16.547 trsvcid: 4420 00:37:16.547 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:16.547 traddr: 10.0.0.1 00:37:16.547 eflags: none 00:37:16.547 sectype: none 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.547 15:55:29 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:16.547 15:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.547 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.830 Initializing NVMe Controllers 00:37:19.830 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:19.830 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:19.830 Initialization complete. Launching workers. 00:37:19.830 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35315, failed: 0 00:37:19.830 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35315, failed to submit 0 00:37:19.830 success 0, unsuccess 35315, failed 0 00:37:19.830 15:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:19.830 15:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:19.830 EAL: No free 2048 kB hugepages reported on node 1 00:37:23.166 Initializing NVMe Controllers 00:37:23.166 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:23.166 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:23.166 Initialization complete. Launching workers. 
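[annotation] For orientation, the configure_kernel_target sequence traced above (nvmf/common.sh@658 through @677) exports the local /dev/nvme0n1 as a kernel NVMe/TCP target via configfs and then verifies it with nvme discover. The xtrace records only the left-hand side of each redirection, so the minimal sketch below fills in the attribute files from the standard kernel nvmet configfs layout; those file names are assumptions, not taken verbatim from the script.

  # minimal sketch: export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420
  # (run as root with the nvmet and nvmet_tcp modules loaded)
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"       # model/serial string; attribute file assumed
  echo 1            > "$subsys/attr_allow_any_host"                  # assumed target of the first 'echo 1' in the trace
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                                # publish the subsystem on the port

The nvme discover output that follows confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are reachable on 10.0.0.1:4420 before the abort workloads start.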
00:37:23.166 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68968, failed: 0 00:37:23.166 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17382, failed to submit 51586 00:37:23.166 success 0, unsuccess 17382, failed 0 00:37:23.166 15:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:23.166 15:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:23.166 EAL: No free 2048 kB hugepages reported on node 1 00:37:25.694 Initializing NVMe Controllers 00:37:25.695 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:25.695 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:25.695 Initialization complete. Launching workers. 00:37:25.695 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67329, failed: 0 00:37:25.695 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16822, failed to submit 50507 00:37:25.695 success 0, unsuccess 16822, failed 0 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:25.695 15:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:27.067 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:27.067 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:27.067 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:27.067 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:28.002 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:37:28.002 00:37:28.002 real 0m14.193s 00:37:28.002 user 0m5.572s 00:37:28.002 sys 0m3.455s 00:37:28.002 15:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:28.260 15:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:28.260 ************************************ 00:37:28.260 END TEST kernel_target_abort 00:37:28.260 ************************************ 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:28.260 rmmod nvme_tcp 00:37:28.260 rmmod nvme_fabrics 00:37:28.260 rmmod nvme_keyring 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1501917 ']' 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1501917 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1501917 ']' 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1501917 00:37:28.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1501917) - No such process 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1501917 is not found' 00:37:28.260 Process with pid 1501917 is not found 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:28.260 15:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:29.637 Waiting for block devices as requested 00:37:29.637 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:29.637 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:29.637 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:29.637 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:29.637 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:29.637 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:29.897 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:29.897 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:29.897 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:37:30.156 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:30.156 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:30.156 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:30.156 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:30.415 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:30.415 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:30.415 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:37:30.415 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:30.675 15:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:32.580 15:55:45 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:32.580 00:37:32.580 real 0m38.308s 00:37:32.580 user 1m1.273s 00:37:32.580 sys 0m9.915s 00:37:32.580 15:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:32.580 15:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.580 ************************************ 00:37:32.580 END TEST nvmf_abort_qd_sizes 00:37:32.580 ************************************ 00:37:32.580 15:55:45 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:32.580 15:55:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:32.580 15:55:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:32.580 15:55:45 -- common/autotest_common.sh@10 -- # set +x 00:37:32.580 ************************************ 00:37:32.580 START TEST keyring_file 00:37:32.580 ************************************ 00:37:32.580 15:55:45 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:32.580 * Looking for test storage... 
00:37:32.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:32.580 15:55:45 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:32.580 15:55:45 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:32.580 15:55:45 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:32.580 15:55:45 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:32.580 15:55:45 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:32.838 15:55:45 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:32.838 15:55:45 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.838 15:55:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.838 15:55:45 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.838 15:55:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:32.838 15:55:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.47W4W4osAk 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:32.838 15:55:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:32.838 15:55:45 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.47W4W4osAk 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.47W4W4osAk 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.47W4W4osAk 00:37:32.838 15:55:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:32.838 15:55:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:32.839 15:55:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JViEGqRzht 00:37:32.839 15:55:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:32.839 15:55:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:32.839 15:55:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:32.839 15:55:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:32.839 15:55:45 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:32.839 15:55:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:32.839 15:55:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:32.839 15:55:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JViEGqRzht 00:37:32.839 15:55:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JViEGqRzht 00:37:32.839 15:55:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.JViEGqRzht 00:37:32.839 15:55:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=1507960 00:37:32.839 15:55:45 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:32.839 15:55:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1507960 00:37:32.839 15:55:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1507960 ']' 00:37:32.839 15:55:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.839 15:55:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:32.839 15:55:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.839 15:55:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:32.839 15:55:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:32.839 [2024-05-15 15:55:45.825649] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:32.839 [2024-05-15 15:55:45.825733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507960 ] 00:37:32.839 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.839 [2024-05-15 15:55:45.863973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
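[annotation] The prep_key calls traced above write two TLS PSKs (key0 and key1) to mktemp files in the NVMe/TCP interchange format and lock them down to mode 0600; the paths are later handed to keyring_file_add_key. A rough sketch of what prep_key does for key0 is shown below. The payload encoding (raw PSK plus a CRC-32 trailer, base64-encoded, digest field 00 for the no-hash variant) and the CRC byte order are assumptions based on the NVMe TLS PSK interchange format; the trace itself only shows the chmod and the python invocation.

  # sketch of prep_key for key0; key1 is the same with the other hex string
  key=00112233445566778899aabbccddeeff
  path=$(mktemp)                                   # e.g. /tmp/tmp.47W4W4osAk in this run
  b64=$(python3 -c 'import base64, binascii, sys
  k = binascii.unhexlify(sys.argv[1])
  crc = binascii.crc32(k).to_bytes(4, "little")    # CRC-32 trailer; byte order is an assumption
  print(base64.b64encode(k + crc).decode())' "$key")
  echo "NVMeTLSkey-1:00:${b64}:" > "$path"         # digest 0 selects the plain (no-hash) variant
  chmod 0600 "$path"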
00:37:32.839 [2024-05-15 15:55:45.899980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.096 [2024-05-15 15:55:45.989094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:33.354 15:55:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:33.354 [2024-05-15 15:55:46.253464] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.354 null0 00:37:33.354 [2024-05-15 15:55:46.285504] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:33.354 [2024-05-15 15:55:46.285573] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:33.354 [2024-05-15 15:55:46.286147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:33.354 [2024-05-15 15:55:46.293555] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.354 15:55:46 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:33.354 [2024-05-15 15:55:46.305603] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:33.354 request: 00:37:33.354 { 00:37:33.354 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:33.354 "secure_channel": false, 00:37:33.354 "listen_address": { 00:37:33.354 "trtype": "tcp", 00:37:33.354 "traddr": "127.0.0.1", 00:37:33.354 "trsvcid": "4420" 00:37:33.354 }, 00:37:33.354 "method": "nvmf_subsystem_add_listener", 00:37:33.354 "req_id": 1 00:37:33.354 } 00:37:33.354 Got JSON-RPC error response 00:37:33.354 response: 00:37:33.354 { 00:37:33.354 "code": -32602, 00:37:33.354 "message": "Invalid parameters" 00:37:33.354 } 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:33.354 15:55:46 keyring_file -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:33.354 15:55:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:33.354 15:55:46 keyring_file -- keyring/file.sh@46 -- # bperfpid=1507975 00:37:33.354 15:55:46 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1507975 /var/tmp/bperf.sock 00:37:33.354 15:55:46 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:33.355 15:55:46 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1507975 ']' 00:37:33.355 15:55:46 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:33.355 15:55:46 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:33.355 15:55:46 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:33.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:33.355 15:55:46 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:33.355 15:55:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:33.355 [2024-05-15 15:55:46.355187] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:33.355 [2024-05-15 15:55:46.355296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507975 ] 00:37:33.355 EAL: No free 2048 kB hugepages reported on node 1 00:37:33.355 [2024-05-15 15:55:46.394996] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
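[annotation] The NOT rpc_cmd step above is a negative check: the 127.0.0.1:4420 listener was already created when the target subsystem was set up, so re-adding it must fail with the -32602 "Listener already exists" error and the helper inverts the exit status. Outside the test harness, the equivalent call looks roughly like this (standard scripts/rpc.py flags for nvmf_subsystem_add_listener; the default target RPC socket is assumed):

  # second add_listener for the same address is expected to fail with "Listener already exists"
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 127.0.0.1 -s 4420 \
      && echo "unexpected success" || echo "got the expected JSON-RPC error"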
00:37:33.355 [2024-05-15 15:55:46.431187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.612 [2024-05-15 15:55:46.519491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.612 15:55:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:33.612 15:55:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:33.612 15:55:46 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:33.612 15:55:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:33.869 15:55:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JViEGqRzht 00:37:33.869 15:55:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JViEGqRzht 00:37:34.127 15:55:47 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:34.127 15:55:47 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:34.127 15:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.127 15:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.127 15:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.386 15:55:47 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.47W4W4osAk == \/\t\m\p\/\t\m\p\.\4\7\W\4\W\4\o\s\A\k ]] 00:37:34.386 15:55:47 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:34.386 15:55:47 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:34.386 15:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.386 15:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.386 15:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.645 15:55:47 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JViEGqRzht == \/\t\m\p\/\t\m\p\.\J\V\i\E\G\q\R\z\h\t ]] 00:37:34.645 15:55:47 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:34.645 15:55:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.645 15:55:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.645 15:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.645 15:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.645 15:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.903 15:55:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:34.903 15:55:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:34.903 15:55:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.903 15:55:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.903 15:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.903 15:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.903 15:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.160 
15:55:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:35.160 15:55:48 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.160 15:55:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.418 [2024-05-15 15:55:48.331634] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:35.418 nvme0n1 00:37:35.418 15:55:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:35.418 15:55:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.418 15:55:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.418 15:55:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.418 15:55:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.418 15:55:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.676 15:55:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:35.676 15:55:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:35.676 15:55:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:35.677 15:55:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.677 15:55:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.677 15:55:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.677 15:55:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:35.935 15:55:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:35.935 15:55:48 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:35.935 Running I/O for 1 seconds... 
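[annotation] Condensed from the bperf_cmd calls above, the happy-path flow through this point is: register both key files with the bdevperf application over its RPC socket, attach an NVMe-oF controller over TCP/TLS using key0, then confirm via keyring_get_keys that the key's refcount went from 1 to 2 once the bdev holds a reference. The commands below mirror the trace; only the shortened rpc variable is added for readability.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.47W4W4osAk
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JViEGqRzht
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'   # expect 2

With the controller attached, bdevperf.py perform_tests drives the one-second randrw workload whose latency summary follows.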
00:37:37.308 00:37:37.308 Latency(us) 00:37:37.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.308 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:37.308 nvme0n1 : 1.02 5299.19 20.70 0.00 0.00 23886.95 5485.61 28544.57 00:37:37.308 =================================================================================================================== 00:37:37.308 Total : 5299.19 20.70 0.00 0.00 23886.95 5485.61 28544.57 00:37:37.308 0 00:37:37.308 15:55:50 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:37.308 15:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:37.308 15:55:50 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:37.308 15:55:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:37.308 15:55:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.308 15:55:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.308 15:55:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.308 15:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.566 15:55:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:37.566 15:55:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:37.566 15:55:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:37.566 15:55:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.566 15:55:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.566 15:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.566 15:55:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:37.825 15:55:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:37.825 15:55:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.825 15:55:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:37.825 15:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:37:38.083 [2024-05-15 15:55:51.039872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:38.083 [2024-05-15 15:55:51.040489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179d3b0 (107): Transport endpoint is not connected 00:37:38.083 [2024-05-15 15:55:51.041478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179d3b0 (9): Bad file descriptor 00:37:38.083 [2024-05-15 15:55:51.042478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:38.083 [2024-05-15 15:55:51.042519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:38.083 [2024-05-15 15:55:51.042534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:38.083 request: 00:37:38.083 { 00:37:38.083 "name": "nvme0", 00:37:38.083 "trtype": "tcp", 00:37:38.083 "traddr": "127.0.0.1", 00:37:38.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.083 "adrfam": "ipv4", 00:37:38.083 "trsvcid": "4420", 00:37:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.083 "psk": "key1", 00:37:38.083 "method": "bdev_nvme_attach_controller", 00:37:38.083 "req_id": 1 00:37:38.083 } 00:37:38.083 Got JSON-RPC error response 00:37:38.083 response: 00:37:38.083 { 00:37:38.083 "code": -32602, 00:37:38.083 "message": "Invalid parameters" 00:37:38.083 } 00:37:38.083 15:55:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:38.083 15:55:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:38.083 15:55:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:38.083 15:55:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:38.083 15:55:51 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:38.083 15:55:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:38.083 15:55:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.083 15:55:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.083 15:55:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:38.083 15:55:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.349 15:55:51 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:38.349 15:55:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:38.349 15:55:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:38.349 15:55:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.349 15:55:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.349 15:55:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:38.349 15:55:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.613 15:55:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:38.613 15:55:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:38.613 15:55:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key 
key0 00:37:38.871 15:55:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:38.871 15:55:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:39.129 15:55:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:39.129 15:55:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:39.129 15:55:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.387 15:55:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:39.387 15:55:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.47W4W4osAk 00:37:39.387 15:55:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.387 15:55:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:39.387 15:55:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:39.645 [2024-05-15 15:55:52.524614] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.47W4W4osAk': 0100660 00:37:39.645 [2024-05-15 15:55:52.524656] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:39.645 request: 00:37:39.645 { 00:37:39.645 "name": "key0", 00:37:39.645 "path": "/tmp/tmp.47W4W4osAk", 00:37:39.645 "method": "keyring_file_add_key", 00:37:39.645 "req_id": 1 00:37:39.645 } 00:37:39.645 Got JSON-RPC error response 00:37:39.645 response: 00:37:39.645 { 00:37:39.645 "code": -1, 00:37:39.645 "message": "Operation not permitted" 00:37:39.645 } 00:37:39.645 15:55:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:39.645 15:55:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.645 15:55:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.645 15:55:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.645 15:55:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.47W4W4osAk 00:37:39.645 15:55:52 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:39.645 15:55:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.47W4W4osAk 00:37:39.903 15:55:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.47W4W4osAk 00:37:39.903 15:55:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:39.903 15:55:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:39.903 15:55:52 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:37:39.903 15:55:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.903 15:55:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:39.903 15:55:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.161 15:55:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:40.161 15:55:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:40.161 15:55:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.162 15:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.162 [2024-05-15 15:55:53.258591] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.47W4W4osAk': No such file or directory 00:37:40.162 [2024-05-15 15:55:53.258629] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:40.162 [2024-05-15 15:55:53.258661] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:40.162 [2024-05-15 15:55:53.258675] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:40.162 [2024-05-15 15:55:53.258688] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:40.419 request: 00:37:40.419 { 00:37:40.419 "name": "nvme0", 00:37:40.419 "trtype": "tcp", 00:37:40.419 "traddr": "127.0.0.1", 00:37:40.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:40.419 "adrfam": "ipv4", 00:37:40.419 "trsvcid": "4420", 00:37:40.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.419 "psk": "key0", 00:37:40.419 "method": "bdev_nvme_attach_controller", 00:37:40.419 "req_id": 1 00:37:40.419 } 00:37:40.419 Got JSON-RPC error response 00:37:40.419 response: 00:37:40.419 { 00:37:40.419 "code": -19, 00:37:40.419 "message": "No such device" 00:37:40.419 } 00:37:40.419 15:55:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:40.419 15:55:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:40.419 15:55:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:40.419 15:55:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:40.419 15:55:53 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:40.419 15:55:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.V08T9Z6r1m 00:37:40.419 15:55:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:40.702 15:55:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:40.702 15:55:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:40.702 15:55:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:40.702 15:55:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:40.702 15:55:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:40.702 15:55:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:40.702 15:55:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.V08T9Z6r1m 00:37:40.702 15:55:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.V08T9Z6r1m 00:37:40.702 15:55:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.V08T9Z6r1m 00:37:40.702 15:55:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V08T9Z6r1m 00:37:40.702 15:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V08T9Z6r1m 00:37:40.959 15:55:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.959 15:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:41.217 nvme0n1 00:37:41.217 15:55:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:41.217 15:55:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:41.217 15:55:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.217 15:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.217 15:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.217 15:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:41.475 15:55:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:41.475 15:55:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:41.475 15:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:37:41.733 15:55:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:41.733 15:55:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:41.733 15:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.733 15:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:41.733 15:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.991 15:55:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:41.991 15:55:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:41.991 15:55:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:41.991 15:55:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.991 15:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.991 15:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.991 15:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:42.249 15:55:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:42.249 15:55:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:42.249 15:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:42.507 15:55:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:42.507 15:55:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:42.507 15:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:42.765 15:55:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:42.765 15:55:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V08T9Z6r1m 00:37:42.765 15:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V08T9Z6r1m 00:37:42.765 15:55:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JViEGqRzht 00:37:42.765 15:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JViEGqRzht 00:37:43.022 15:55:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:43.022 15:55:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:43.589 nvme0n1 00:37:43.589 15:55:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:43.589 15:55:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:43.848 15:55:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:43.848 "subsystems": [ 00:37:43.848 { 00:37:43.848 
"subsystem": "keyring", 00:37:43.848 "config": [ 00:37:43.848 { 00:37:43.848 "method": "keyring_file_add_key", 00:37:43.848 "params": { 00:37:43.848 "name": "key0", 00:37:43.848 "path": "/tmp/tmp.V08T9Z6r1m" 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "keyring_file_add_key", 00:37:43.848 "params": { 00:37:43.848 "name": "key1", 00:37:43.848 "path": "/tmp/tmp.JViEGqRzht" 00:37:43.848 } 00:37:43.848 } 00:37:43.848 ] 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "subsystem": "iobuf", 00:37:43.848 "config": [ 00:37:43.848 { 00:37:43.848 "method": "iobuf_set_options", 00:37:43.848 "params": { 00:37:43.848 "small_pool_count": 8192, 00:37:43.848 "large_pool_count": 1024, 00:37:43.848 "small_bufsize": 8192, 00:37:43.848 "large_bufsize": 135168 00:37:43.848 } 00:37:43.848 } 00:37:43.848 ] 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "subsystem": "sock", 00:37:43.848 "config": [ 00:37:43.848 { 00:37:43.848 "method": "sock_impl_set_options", 00:37:43.848 "params": { 00:37:43.848 "impl_name": "posix", 00:37:43.848 "recv_buf_size": 2097152, 00:37:43.848 "send_buf_size": 2097152, 00:37:43.848 "enable_recv_pipe": true, 00:37:43.848 "enable_quickack": false, 00:37:43.848 "enable_placement_id": 0, 00:37:43.848 "enable_zerocopy_send_server": true, 00:37:43.848 "enable_zerocopy_send_client": false, 00:37:43.848 "zerocopy_threshold": 0, 00:37:43.848 "tls_version": 0, 00:37:43.848 "enable_ktls": false 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "sock_impl_set_options", 00:37:43.848 "params": { 00:37:43.848 "impl_name": "ssl", 00:37:43.848 "recv_buf_size": 4096, 00:37:43.848 "send_buf_size": 4096, 00:37:43.848 "enable_recv_pipe": true, 00:37:43.848 "enable_quickack": false, 00:37:43.848 "enable_placement_id": 0, 00:37:43.848 "enable_zerocopy_send_server": true, 00:37:43.848 "enable_zerocopy_send_client": false, 00:37:43.848 "zerocopy_threshold": 0, 00:37:43.848 "tls_version": 0, 00:37:43.848 "enable_ktls": false 00:37:43.848 } 00:37:43.848 } 00:37:43.848 ] 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "subsystem": "vmd", 00:37:43.848 "config": [] 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "subsystem": "accel", 00:37:43.848 "config": [ 00:37:43.848 { 00:37:43.848 "method": "accel_set_options", 00:37:43.848 "params": { 00:37:43.848 "small_cache_size": 128, 00:37:43.848 "large_cache_size": 16, 00:37:43.848 "task_count": 2048, 00:37:43.848 "sequence_count": 2048, 00:37:43.848 "buf_count": 2048 00:37:43.848 } 00:37:43.848 } 00:37:43.848 ] 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "subsystem": "bdev", 00:37:43.848 "config": [ 00:37:43.848 { 00:37:43.848 "method": "bdev_set_options", 00:37:43.848 "params": { 00:37:43.848 "bdev_io_pool_size": 65535, 00:37:43.848 "bdev_io_cache_size": 256, 00:37:43.848 "bdev_auto_examine": true, 00:37:43.848 "iobuf_small_cache_size": 128, 00:37:43.848 "iobuf_large_cache_size": 16 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "bdev_raid_set_options", 00:37:43.848 "params": { 00:37:43.848 "process_window_size_kb": 1024 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "bdev_iscsi_set_options", 00:37:43.848 "params": { 00:37:43.848 "timeout_sec": 30 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "bdev_nvme_set_options", 00:37:43.848 "params": { 00:37:43.848 "action_on_timeout": "none", 00:37:43.848 "timeout_us": 0, 00:37:43.848 "timeout_admin_us": 0, 00:37:43.848 "keep_alive_timeout_ms": 10000, 00:37:43.848 "arbitration_burst": 0, 00:37:43.848 "low_priority_weight": 0, 
00:37:43.848 "medium_priority_weight": 0, 00:37:43.848 "high_priority_weight": 0, 00:37:43.848 "nvme_adminq_poll_period_us": 10000, 00:37:43.848 "nvme_ioq_poll_period_us": 0, 00:37:43.848 "io_queue_requests": 512, 00:37:43.848 "delay_cmd_submit": true, 00:37:43.848 "transport_retry_count": 4, 00:37:43.848 "bdev_retry_count": 3, 00:37:43.848 "transport_ack_timeout": 0, 00:37:43.848 "ctrlr_loss_timeout_sec": 0, 00:37:43.848 "reconnect_delay_sec": 0, 00:37:43.848 "fast_io_fail_timeout_sec": 0, 00:37:43.848 "disable_auto_failback": false, 00:37:43.848 "generate_uuids": false, 00:37:43.848 "transport_tos": 0, 00:37:43.848 "nvme_error_stat": false, 00:37:43.848 "rdma_srq_size": 0, 00:37:43.848 "io_path_stat": false, 00:37:43.848 "allow_accel_sequence": false, 00:37:43.848 "rdma_max_cq_size": 0, 00:37:43.848 "rdma_cm_event_timeout_ms": 0, 00:37:43.848 "dhchap_digests": [ 00:37:43.848 "sha256", 00:37:43.848 "sha384", 00:37:43.848 "sha512" 00:37:43.848 ], 00:37:43.848 "dhchap_dhgroups": [ 00:37:43.848 "null", 00:37:43.848 "ffdhe2048", 00:37:43.848 "ffdhe3072", 00:37:43.848 "ffdhe4096", 00:37:43.848 "ffdhe6144", 00:37:43.848 "ffdhe8192" 00:37:43.848 ] 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "bdev_nvme_attach_controller", 00:37:43.848 "params": { 00:37:43.848 "name": "nvme0", 00:37:43.848 "trtype": "TCP", 00:37:43.848 "adrfam": "IPv4", 00:37:43.848 "traddr": "127.0.0.1", 00:37:43.848 "trsvcid": "4420", 00:37:43.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.848 "prchk_reftag": false, 00:37:43.848 "prchk_guard": false, 00:37:43.848 "ctrlr_loss_timeout_sec": 0, 00:37:43.848 "reconnect_delay_sec": 0, 00:37:43.848 "fast_io_fail_timeout_sec": 0, 00:37:43.848 "psk": "key0", 00:37:43.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.848 "hdgst": false, 00:37:43.848 "ddgst": false 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "bdev_nvme_set_hotplug", 00:37:43.848 "params": { 00:37:43.848 "period_us": 100000, 00:37:43.848 "enable": false 00:37:43.848 } 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "method": "bdev_wait_for_examine" 00:37:43.848 } 00:37:43.848 ] 00:37:43.848 }, 00:37:43.848 { 00:37:43.848 "subsystem": "nbd", 00:37:43.848 "config": [] 00:37:43.848 } 00:37:43.848 ] 00:37:43.848 }' 00:37:43.848 15:55:56 keyring_file -- keyring/file.sh@114 -- # killprocess 1507975 00:37:43.848 15:55:56 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1507975 ']' 00:37:43.848 15:55:56 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1507975 00:37:43.848 15:55:56 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:43.848 15:55:56 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:43.848 15:55:56 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1507975 00:37:43.848 15:55:56 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:43.849 15:55:56 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:43.849 15:55:56 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1507975' 00:37:43.849 killing process with pid 1507975 00:37:43.849 15:55:56 keyring_file -- common/autotest_common.sh@965 -- # kill 1507975 00:37:43.849 Received shutdown signal, test time was about 1.000000 seconds 00:37:43.849 00:37:43.849 Latency(us) 00:37:43.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.849 
=================================================================================================================== 00:37:43.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:43.849 15:55:56 keyring_file -- common/autotest_common.sh@970 -- # wait 1507975 00:37:44.107 15:55:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=1509318 00:37:44.107 15:55:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1509318 /var/tmp/bperf.sock 00:37:44.107 15:55:56 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1509318 ']' 00:37:44.107 15:55:56 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:44.108 15:55:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:44.108 15:55:56 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:44.108 15:55:56 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:44.108 15:55:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:44.108 "subsystems": [ 00:37:44.108 { 00:37:44.108 "subsystem": "keyring", 00:37:44.108 "config": [ 00:37:44.108 { 00:37:44.108 "method": "keyring_file_add_key", 00:37:44.108 "params": { 00:37:44.108 "name": "key0", 00:37:44.108 "path": "/tmp/tmp.V08T9Z6r1m" 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "keyring_file_add_key", 00:37:44.108 "params": { 00:37:44.108 "name": "key1", 00:37:44.108 "path": "/tmp/tmp.JViEGqRzht" 00:37:44.108 } 00:37:44.108 } 00:37:44.108 ] 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "subsystem": "iobuf", 00:37:44.108 "config": [ 00:37:44.108 { 00:37:44.108 "method": "iobuf_set_options", 00:37:44.108 "params": { 00:37:44.108 "small_pool_count": 8192, 00:37:44.108 "large_pool_count": 1024, 00:37:44.108 "small_bufsize": 8192, 00:37:44.108 "large_bufsize": 135168 00:37:44.108 } 00:37:44.108 } 00:37:44.108 ] 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "subsystem": "sock", 00:37:44.108 "config": [ 00:37:44.108 { 00:37:44.108 "method": "sock_impl_set_options", 00:37:44.108 "params": { 00:37:44.108 "impl_name": "posix", 00:37:44.108 "recv_buf_size": 2097152, 00:37:44.108 "send_buf_size": 2097152, 00:37:44.108 "enable_recv_pipe": true, 00:37:44.108 "enable_quickack": false, 00:37:44.108 "enable_placement_id": 0, 00:37:44.108 "enable_zerocopy_send_server": true, 00:37:44.108 "enable_zerocopy_send_client": false, 00:37:44.108 "zerocopy_threshold": 0, 00:37:44.108 "tls_version": 0, 00:37:44.108 "enable_ktls": false 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "sock_impl_set_options", 00:37:44.108 "params": { 00:37:44.108 "impl_name": "ssl", 00:37:44.108 "recv_buf_size": 4096, 00:37:44.108 "send_buf_size": 4096, 00:37:44.108 "enable_recv_pipe": true, 00:37:44.108 "enable_quickack": false, 00:37:44.108 "enable_placement_id": 0, 00:37:44.108 "enable_zerocopy_send_server": true, 00:37:44.108 "enable_zerocopy_send_client": false, 00:37:44.108 "zerocopy_threshold": 0, 00:37:44.108 "tls_version": 0, 00:37:44.108 "enable_ktls": false 00:37:44.108 } 00:37:44.108 } 00:37:44.108 ] 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "subsystem": "vmd", 00:37:44.108 "config": [] 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "subsystem": "accel", 00:37:44.108 "config": [ 00:37:44.108 { 00:37:44.108 "method": "accel_set_options", 00:37:44.108 "params": { 00:37:44.108 
"small_cache_size": 128, 00:37:44.108 "large_cache_size": 16, 00:37:44.108 "task_count": 2048, 00:37:44.108 "sequence_count": 2048, 00:37:44.108 "buf_count": 2048 00:37:44.108 } 00:37:44.108 } 00:37:44.108 ] 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "subsystem": "bdev", 00:37:44.108 "config": [ 00:37:44.108 { 00:37:44.108 "method": "bdev_set_options", 00:37:44.108 "params": { 00:37:44.108 "bdev_io_pool_size": 65535, 00:37:44.108 "bdev_io_cache_size": 256, 00:37:44.108 "bdev_auto_examine": true, 00:37:44.108 "iobuf_small_cache_size": 128, 00:37:44.108 "iobuf_large_cache_size": 16 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "bdev_raid_set_options", 00:37:44.108 "params": { 00:37:44.108 "process_window_size_kb": 1024 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "bdev_iscsi_set_options", 00:37:44.108 "params": { 00:37:44.108 "timeout_sec": 30 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "bdev_nvme_set_options", 00:37:44.108 "params": { 00:37:44.108 "action_on_timeout": "none", 00:37:44.108 "timeout_us": 0, 00:37:44.108 "timeout_admin_us": 0, 00:37:44.108 "keep_alive_timeout_ms": 10000, 00:37:44.108 "arbitration_burst": 0, 00:37:44.108 "low_priority_weight": 0, 00:37:44.108 "medium_priority_weight": 0, 00:37:44.108 "high_priority_weight": 0, 00:37:44.108 "nvme_adminq_poll_period_us": 10000, 00:37:44.108 "nvme_ioq_poll_period_us": 0, 00:37:44.108 "io_queue_requests": 512, 00:37:44.108 "delay_cmd_submit": true, 00:37:44.108 "transport_retry_count": 4, 00:37:44.108 "bdev_retry_count": 3, 00:37:44.108 "transport_ack_timeout": 0, 00:37:44.108 "ctrlr_loss_timeout_sec": 0, 00:37:44.108 "reconnect_delay_sec": 0, 00:37:44.108 "fast_io_fail_timeout_sec": 0, 00:37:44.108 "disable_auto_failback": false, 00:37:44.108 "generate_uuids": false, 00:37:44.108 "transport_tos": 0, 00:37:44.108 "nvme_error_stat": false, 00:37:44.108 "rdma_srq_size": 0, 00:37:44.108 "io_path_stat": false, 00:37:44.108 "allow_accel_sequence": false, 00:37:44.108 "rdma_max_cq_size": 0, 00:37:44.108 "rdma_cm_event_timeout_ms": 0, 00:37:44.108 "dhchap_digests": [ 00:37:44.108 "sha256", 00:37:44.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:44.108 "sha384", 00:37:44.108 "sha512" 00:37:44.108 ], 00:37:44.108 "dhchap_dhgroups": [ 00:37:44.108 "null", 00:37:44.108 "ffdhe2048", 00:37:44.108 "ffdhe3072", 00:37:44.108 "ffdhe4096", 00:37:44.108 "ffdhe6144", 00:37:44.108 "ffdhe8192" 00:37:44.108 ] 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "bdev_nvme_attach_controller", 00:37:44.108 "params": { 00:37:44.108 "name": "nvme0", 00:37:44.108 "trtype": "TCP", 00:37:44.108 "adrfam": "IPv4", 00:37:44.108 "traddr": "127.0.0.1", 00:37:44.108 "trsvcid": "4420", 00:37:44.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.108 "prchk_reftag": false, 00:37:44.108 "prchk_guard": false, 00:37:44.108 "ctrlr_loss_timeout_sec": 0, 00:37:44.108 "reconnect_delay_sec": 0, 00:37:44.108 "fast_io_fail_timeout_sec": 0, 00:37:44.108 "psk": "key0", 00:37:44.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.108 "hdgst": false, 00:37:44.108 "ddgst": false 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "bdev_nvme_set_hotplug", 00:37:44.108 "params": { 00:37:44.108 "period_us": 100000, 00:37:44.108 "enable": false 00:37:44.108 } 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "method": "bdev_wait_for_examine" 00:37:44.108 } 00:37:44.108 ] 00:37:44.108 }, 00:37:44.108 { 00:37:44.108 "subsystem": "nbd", 00:37:44.108 "config": [] 00:37:44.108 } 00:37:44.108 ] 00:37:44.108 }' 00:37:44.108 15:55:56 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:44.108 15:55:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:44.108 [2024-05-15 15:55:57.022146] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:44.108 [2024-05-15 15:55:57.022258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509318 ] 00:37:44.108 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.108 [2024-05-15 15:55:57.061735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:37:44.108 [2024-05-15 15:55:57.095359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.108 [2024-05-15 15:55:57.177992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.367 [2024-05-15 15:55:57.355103] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:44.933 15:55:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:44.933 15:55:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:44.933 15:55:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:44.933 15:55:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.933 15:55:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:45.191 15:55:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:45.191 15:55:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:45.191 15:55:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:45.191 15:55:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:45.191 15:55:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.191 15:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.191 15:55:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:45.448 15:55:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:45.448 15:55:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:45.448 15:55:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:45.448 15:55:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:45.448 15:55:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.448 15:55:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:45.448 15:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.706 15:55:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:45.706 15:55:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:45.706 15:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:45.706 15:55:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:45.963 15:55:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:45.963 15:55:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:45.963 15:55:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.V08T9Z6r1m /tmp/tmp.JViEGqRzht 00:37:45.963 15:55:58 keyring_file -- keyring/file.sh@20 -- # killprocess 1509318 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1509318 ']' 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1509318 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1509318 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:45.963 15:55:58 
keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1509318' 00:37:45.963 killing process with pid 1509318 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@965 -- # kill 1509318 00:37:45.963 Received shutdown signal, test time was about 1.000000 seconds 00:37:45.963 00:37:45.963 Latency(us) 00:37:45.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.963 =================================================================================================================== 00:37:45.963 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:45.963 15:55:58 keyring_file -- common/autotest_common.sh@970 -- # wait 1509318 00:37:46.221 15:55:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1507960 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1507960 ']' 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1507960 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1507960 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1507960' 00:37:46.221 killing process with pid 1507960 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@965 -- # kill 1507960 00:37:46.221 [2024-05-15 15:55:59.217093] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:46.221 [2024-05-15 15:55:59.217141] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:46.221 15:55:59 keyring_file -- common/autotest_common.sh@970 -- # wait 1507960 00:37:46.787 00:37:46.787 real 0m13.970s 00:37:46.787 user 0m34.715s 00:37:46.787 sys 0m3.245s 00:37:46.787 15:55:59 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:46.787 15:55:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:46.787 ************************************ 00:37:46.787 END TEST keyring_file 00:37:46.787 ************************************ 00:37:46.787 15:55:59 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:37:46.787 15:55:59 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:46.787 15:55:59 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 
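For reference, the sequence the keyring_file test exercised over the bperf socket can be replayed by hand with the same rpc.py calls that appear in the trace above. This is only a consolidated sketch (the key paths are the throwaway temp files created during this run, and in the test itself the bdevperf JSON config was handed over on /dev/fd/63 rather than written to disk):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # register the PSK files with the keyring
  $RPC keyring_file_add_key key0 /tmp/tmp.V08T9Z6r1m
  $RPC keyring_file_add_key key1 /tmp/tmp.JViEGqRzht
  # attach the NVMe-oF/TCP controller using key0 as the PSK
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # while attached, the controller holds an extra reference on key0 (the trace checks refcnt == 2)
  $RPC keyring_get_keys | jq '.[] | select(.name == "key0")'
  # detaching drops that reference; removing the key then empties the keyring again
  $RPC bdev_nvme_detach_controller nvme0
  $RPC keyring_file_remove_key key0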
00:37:46.787 15:55:59 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:46.787 15:55:59 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:46.787 15:55:59 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:46.787 15:55:59 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:37:46.787 15:55:59 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:37:46.787 15:55:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:46.787 15:55:59 -- common/autotest_common.sh@10 -- # set +x 00:37:46.787 15:55:59 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:37:46.787 15:55:59 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:46.787 15:55:59 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:46.787 15:55:59 -- common/autotest_common.sh@10 -- # set +x 00:37:48.688 INFO: APP EXITING 00:37:48.688 INFO: killing all VMs 00:37:48.688 INFO: killing vhost app 00:37:48.688 INFO: EXIT DONE 00:37:49.623 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:49.623 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:49.623 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:49.623 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:49.623 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:49.623 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:49.623 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:49.623 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:49.623 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:37:49.623 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:49.881 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:49.881 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:49.881 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:49.881 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:49.881 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:49.881 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:49.881 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:51.254 Cleaning 00:37:51.254 Removing: /var/run/dpdk/spdk0/config 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:51.254 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:51.254 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:51.254 Removing: /var/run/dpdk/spdk1/config 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:51.254 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:51.254 Removing: 
/var/run/dpdk/spdk1/hugepage_info 00:37:51.254 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:51.254 Removing: /var/run/dpdk/spdk2/config 00:37:51.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:51.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:51.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:51.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:51.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:51.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:51.255 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:51.255 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:51.255 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:51.255 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:51.255 Removing: /var/run/dpdk/spdk3/config 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:51.255 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:51.255 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:51.255 Removing: /var/run/dpdk/spdk4/config 00:37:51.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:51.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:51.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:51.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:51.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:51.516 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:51.516 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:51.516 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:51.516 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:51.516 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:51.516 Removing: /dev/shm/bdev_svc_trace.1 00:37:51.516 Removing: /dev/shm/nvmf_trace.0 00:37:51.516 Removing: /dev/shm/spdk_tgt_trace.pid1174832 00:37:51.516 Removing: /var/run/dpdk/spdk0 00:37:51.516 Removing: /var/run/dpdk/spdk1 00:37:51.516 Removing: /var/run/dpdk/spdk2 00:37:51.516 Removing: /var/run/dpdk/spdk3 00:37:51.516 Removing: /var/run/dpdk/spdk4 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1173284 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1174014 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1174832 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1175267 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1175953 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1176100 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1176812 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1176828 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1177070 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1178374 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1179303 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1179490 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1179793 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1179993 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1180181 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1180344 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1180508 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1180688 00:37:51.516 
Removing: /var/run/dpdk/spdk_pid1181261 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1183612 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1183777 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1183951 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1183964 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1184393 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1184397 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1184734 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1184833 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1185003 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1185132 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1185295 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1185307 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1185672 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1185833 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186145 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186315 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186337 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186474 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186678 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186837 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1186992 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1187273 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1187425 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1187584 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1187846 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1188013 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1188173 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1188326 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1188603 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1188761 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1188919 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1189189 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1189355 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1189509 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1189696 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1189945 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1190101 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1190261 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1190452 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1190656 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1193123 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1248407 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1251313 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1258441 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1262347 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1265385 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1265789 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1273618 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1273620 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1274276 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1274837 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1275474 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1275879 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1275995 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1276138 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1276266 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1276273 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1276945 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1277482 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1278140 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1278536 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1278550 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1278799 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1279683 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1280407 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1286044 00:37:51.516 
Removing: /var/run/dpdk/spdk_pid1286316 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1289118 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1293216 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1295885 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1302843 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1308739 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1309928 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1310596 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1321906 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1324415 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1348418 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1351613 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1352668 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1353986 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1354126 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1354257 00:37:51.516 Removing: /var/run/dpdk/spdk_pid1354273 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1354707 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1356128 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1356893 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1357671 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1359285 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1359591 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1360150 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1362956 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1366635 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1370173 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1394814 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1397446 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1401626 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1402570 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1403538 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1406496 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1409134 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1413929 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1413936 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1417105 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1417242 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1417380 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1417646 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1417651 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1418724 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1419908 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1421267 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1422994 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1424170 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1425351 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1429317 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1429646 00:37:51.774 Removing: /var/run/dpdk/spdk_pid1430775 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1431253 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1435104 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1436959 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1440659 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1444530 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1451034 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1456506 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1456508 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1469513 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1469926 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1470327 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1470738 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1471315 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1471725 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1472158 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1472655 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1475360 00:37:51.775 
Removing: /var/run/dpdk/spdk_pid1475581 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1479657 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1479827 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1481434 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1486657 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1486759 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1490150 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1492051 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1493453 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1494225 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1495592 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1496465 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1502276 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1502616 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1503006 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1504652 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1504934 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1505328 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1507960 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1507975 00:37:51.775 Removing: /var/run/dpdk/spdk_pid1509318 00:37:51.775 Clean 00:37:51.775 15:56:04 -- common/autotest_common.sh@1447 -- # return 0 00:37:51.775 15:56:04 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:37:51.775 15:56:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.775 15:56:04 -- common/autotest_common.sh@10 -- # set +x 00:37:51.775 15:56:04 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:37:51.775 15:56:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.775 15:56:04 -- common/autotest_common.sh@10 -- # set +x 00:37:51.775 15:56:04 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:51.775 15:56:04 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:51.775 15:56:04 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:51.775 15:56:04 -- spdk/autotest.sh@387 -- # hash lcov 00:37:51.775 15:56:04 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:51.775 15:56:04 -- spdk/autotest.sh@389 -- # hostname 00:37:51.775 15:56:04 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:52.037 geninfo: WARNING: invalid characters removed from testname! 
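The per-test capture above (cov_test.info, tagged with the spdk-gp-06 hostname) is merged with the baseline capture (cov_base.info) and filtered in the lcov commands that follow, producing cov_total.info. Rendering that file as a browsable report is not part of this job; as an illustration only, it takes a single genhtml call from the same lcov package (the output directory below is an arbitrary example, not a path from this run):

  genhtml --legend --branch-coverage \
      -o /tmp/spdk-coverage-html \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info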
00:38:24.097 15:56:32 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:24.097 15:56:36 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:26.647 15:56:39 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:29.928 15:56:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:32.466 15:56:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:35.740 15:56:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:38.266 15:56:51 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:38.266 15:56:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.266 15:56:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:38.266 15:56:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.266 15:56:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.266 15:56:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.266 15:56:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.266 15:56:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.266 15:56:51 -- paths/export.sh@5 -- $ export PATH 00:38:38.266 15:56:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.266 15:56:51 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:38.266 15:56:51 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:38.266 15:56:51 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715781411.XXXXXX 00:38:38.266 15:56:51 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715781411.owfVRp 00:38:38.266 15:56:51 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:38.266 15:56:51 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:38:38.266 15:56:51 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:38.266 15:56:51 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:38.266 15:56:51 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:38.266 15:56:51 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:38.266 15:56:51 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:38.266 15:56:51 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:38.266 15:56:51 -- common/autotest_common.sh@10 -- $ set +x 00:38:38.266 15:56:51 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:38.266 15:56:51 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:38.266 15:56:51 -- pm/common@17 -- $ local monitor 00:38:38.266 15:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.266 15:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.266 15:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.266 
15:56:51 -- pm/common@21 -- $ date +%s 00:38:38.266 15:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.266 15:56:51 -- pm/common@21 -- $ date +%s 00:38:38.266 15:56:51 -- pm/common@25 -- $ sleep 1 00:38:38.266 15:56:51 -- pm/common@21 -- $ date +%s 00:38:38.266 15:56:51 -- pm/common@21 -- $ date +%s 00:38:38.266 15:56:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715781411 00:38:38.266 15:56:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715781411 00:38:38.266 15:56:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715781411 00:38:38.266 15:56:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715781411 00:38:38.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715781411_collect-vmstat.pm.log 00:38:38.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715781411_collect-cpu-load.pm.log 00:38:38.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715781411_collect-cpu-temp.pm.log 00:38:38.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715781411_collect-bmc-pm.bmc.pm.log 00:38:39.203 15:56:52 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:39.203 15:56:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:39.203 15:56:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:39.203 15:56:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:39.203 15:56:52 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:39.203 15:56:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:39.203 15:56:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:39.203 15:56:52 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:39.203 15:56:52 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:39.203 15:56:52 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:39.203 15:56:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:39.203 15:56:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:39.203 15:56:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:39.203 15:56:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:39.203 15:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.203 15:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:39.203 15:56:52 -- pm/common@44 -- $ pid=1520550 00:38:39.203 15:56:52 -- pm/common@50 -- $ kill -TERM 1520550 00:38:39.203 15:56:52 -- pm/common@42 -- $ for monitor in 
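The four collect-* helpers launched above sample CPU load, vmstat, CPU temperature and BMC power data into the power directory for the duration of the packaging step, with their output redirected to the monitor.autopackage.sh.*.pm.log files shown; the shutdown path below then looks for the matching collect-*.pid files under that directory and sends SIGTERM to each. A rough illustration of running one collector by hand the same way (directory and prefix are placeholders, not values from this run):

  PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
  $PM/collect-cpu-load -d ./power -l -p manual_run &
  sleep 60
  kill -TERM "$(cat ./power/collect-cpu-load.pid)"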
"${MONITOR_RESOURCES[@]}" 00:38:39.203 15:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:39.203 15:56:52 -- pm/common@44 -- $ pid=1520552 00:38:39.203 15:56:52 -- pm/common@50 -- $ kill -TERM 1520552 00:38:39.203 15:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.203 15:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:39.203 15:56:52 -- pm/common@44 -- $ pid=1520554 00:38:39.203 15:56:52 -- pm/common@50 -- $ kill -TERM 1520554 00:38:39.203 15:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.203 15:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:39.203 15:56:52 -- pm/common@44 -- $ pid=1520590 00:38:39.203 15:56:52 -- pm/common@50 -- $ sudo -E kill -TERM 1520590 00:38:39.203 + [[ -n 1064570 ]] 00:38:39.203 + sudo kill 1064570 00:38:39.213 [Pipeline] } 00:38:39.231 [Pipeline] // stage 00:38:39.236 [Pipeline] } 00:38:39.254 [Pipeline] // timeout 00:38:39.260 [Pipeline] } 00:38:39.277 [Pipeline] // catchError 00:38:39.282 [Pipeline] } 00:38:39.299 [Pipeline] // wrap 00:38:39.305 [Pipeline] } 00:38:39.320 [Pipeline] // catchError 00:38:39.328 [Pipeline] stage 00:38:39.330 [Pipeline] { (Epilogue) 00:38:39.344 [Pipeline] catchError 00:38:39.346 [Pipeline] { 00:38:39.359 [Pipeline] echo 00:38:39.360 Cleanup processes 00:38:39.366 [Pipeline] sh 00:38:39.646 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:39.646 1520706 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:39.646 1520815 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:39.659 [Pipeline] sh 00:38:39.938 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:39.938 ++ grep -v 'sudo pgrep' 00:38:39.938 ++ awk '{print $1}' 00:38:39.938 + sudo kill -9 1520706 00:38:39.949 [Pipeline] sh 00:38:40.225 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:50.198 [Pipeline] sh 00:38:50.477 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:50.477 Artifacts sizes are good 00:38:50.493 [Pipeline] archiveArtifacts 00:38:50.501 Archiving artifacts 00:38:50.680 [Pipeline] sh 00:38:50.958 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:50.972 [Pipeline] cleanWs 00:38:50.981 [WS-CLEANUP] Deleting project workspace... 00:38:50.981 [WS-CLEANUP] Deferred wipeout is used... 00:38:50.988 [WS-CLEANUP] done 00:38:50.989 [Pipeline] } 00:38:51.007 [Pipeline] // catchError 00:38:51.019 [Pipeline] sh 00:38:51.296 + logger -p user.info -t JENKINS-CI 00:38:51.304 [Pipeline] } 00:38:51.320 [Pipeline] // stage 00:38:51.325 [Pipeline] } 00:38:51.343 [Pipeline] // node 00:38:51.348 [Pipeline] End of Pipeline 00:38:51.381 Finished: SUCCESS